
Automated Systems are AI
Human culture varies from place to place on Planet Earth. Some cultures dominate, and dominant cultures tend to disrespect less dominant ones. This leads to a kind of homogenization that results in a predictable monoculture. For anyone who understands the importance of biodiversity for health and wellbeing, the dangers of monoculture are easy to see. What has this got to do with automated systems? A LOT.
Automated systems like ChatGPT are often hyped as the emergence of Artificial General Intelligence. This hyperbole harms industry and society because it is not true. Yes, processors are extremely fast; some can do trillions of calculations a second. Does that make them intelligent? No. Fast? Yes. Can they get faster? Probably. This is great for automated systems like self-driving cars and the factories that build them. It is great for guiding reusable rockets to land themselves safely. But speed is often confused with intelligence, and hyperbole is often confused with fact.
The Promise of High-Speed Computing?
Better health and wellbeing for each other and all life on our planet? Hopefully. But the obstacle to this is human criminality. Crime is now organized on a global scale, and it seems unstoppable. It has infiltrated governments all over our world, carrying a moral framework based more on the criminal code than on ethical democratic principles. Old clichés like “world domination”, “power and glory”, and “winner take all” tend to find their way into sales literature as if these were things healthy human beings want. Of course, much of this is done through deception and illusion, because “there is a sucker born every minute”. Now, with open access to high-speed computing, criminals are having a field day. This is the problem facing humanity. It is not artificially intelligent machines that will wipe out humanity and all life on Earth. It is us. Can high-speed computing be used to intervene?
ChatGPT Responds:
High-speed computing can be a valuable tool in combating criminal activities in various ways. Here are some ways in which high-speed computing can be utilized to help stop criminals:
- Data Analysis and Pattern Recognition: High-speed computing can process vast amounts of data quickly, allowing law enforcement agencies to analyze and identify patterns related to criminal activities. By applying advanced algorithms and machine learning techniques, computers can detect correlations and anomalies in data, helping to identify potential criminal networks, trends, or suspicious activities.
- Video Surveillance and Facial Recognition: High-speed computing enables real-time video analysis, allowing for efficient monitoring of surveillance camera feeds. By integrating facial recognition algorithms, computers can compare captured faces against criminal databases, instantly flagging individuals with criminal records or those on watchlists. This technology can help identify suspects, locate missing persons, or prevent crimes in progress.
- Predictive Analytics: By leveraging high-speed computing and analyzing large volumes of data, predictive analytics can help law enforcement agencies anticipate and prevent criminal activities. By considering various factors such as historical crime data, social media analysis, and environmental conditions, predictive models can identify high-risk areas and times for criminal activities, allowing law enforcement to allocate resources more effectively.
- Cybersecurity and Threat Detection: High-speed computing plays a crucial role in detecting and preventing cybercrime. Advanced algorithms can continuously analyze network traffic, identifying suspicious patterns or anomalies that may indicate hacking attempts, malware infections, or data breaches. Real-time threat intelligence coupled with high-speed computing enables swift response and mitigation of cyber threats.
- Forensic Analysis: High-speed computing facilitates rapid analysis of forensic evidence, such as DNA profiling, fingerprint matching, or ballistics analysis. By processing large datasets quickly, computers can help identify matches or connections between evidence and known criminals, supporting investigations and linking cases that may otherwise go unnoticed.
- Encryption and Cryptanalysis: Criminals often use encryption to secure their communications and hide their activities. High-speed computing can be used in cryptographic analysis to break encryption algorithms, uncovering vital information that may assist in criminal investigations. Conversely, high-speed computing can also be used by law enforcement agencies to develop robust encryption methods to protect sensitive data from criminals.
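The first bullet's idea of finding "anomalies in data" can be sketched with a few lines of standard-library Python. The transaction-count figures below are invented for illustration, and real investigative systems use far richer models; this only shows the underlying principle of comparing each data point against a robust baseline (here, the median absolute deviation, which one extreme value cannot inflate to hide itself).

```python
import statistics

def flag_anomalies(values, threshold=3.5):
    """Return the values whose modified z-score marks them as outliers.

    Uses the median and the median absolute deviation (MAD) rather than
    the mean and standard deviation, so a single extreme value cannot
    mask itself by dragging the baseline toward it.
    """
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:  # all values (nearly) identical: nothing to flag
        return []
    # 0.6745 rescales MAD to be comparable to a standard deviation
    return [v for v in values if 0.6745 * abs(v - med) / mad > threshold]

# e.g. daily transaction counts with one suspicious spike
counts = [100, 102, 98, 101, 99, 10000]
print(flag_anomalies(counts))  # → [10000]
```

The same comparison-against-a-baseline idea underlies the far more elaborate machine-learning pipelines the bullet describes.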
It’s important to note that the use of high-speed computing in law enforcement should be accompanied by proper privacy safeguards, ethical considerations, and oversight to ensure that the technology is used responsibly and within legal boundaries.
But What Are Privacy Safeguards, Ethical Considerations, and Oversight? ChatGPT Responds.
The deployment of high-speed automated systems raises important privacy safeguards, ethical considerations, and the need for oversight to protect individuals and ensure responsible use. Here are some key aspects to consider:
- Privacy by Design: Privacy should be embedded into the design and development of automated systems from the outset. This involves incorporating privacy safeguards into the system architecture, data collection, processing, and storage practices to minimize the risk of privacy violations.
- Data Minimization: Collect and retain only the minimum amount of data necessary for the system’s intended purpose. Avoid indiscriminate or excessive data collection to reduce the potential for privacy breaches.
- Anonymization and De-identification: Implement robust techniques to anonymize or de-identify personal data whenever possible. This helps protect individuals’ privacy by preventing the identification of specific individuals from the collected data.
- Informed Consent: Obtain informed consent from individuals whose data is being collected or processed by the automated system. Provide clear and understandable information about the purpose, scope, and potential risks involved, allowing individuals to make an informed decision about their participation.
- Transparency and Explainability: Automated systems should be transparent in their operations, providing clear information about how data is collected, used, and processed. Individuals should have the right to understand and question the decisions made by automated systems that impact them.
- Algorithmic Fairness and Bias Mitigation: Mitigate bias and ensure fairness in the design and implementation of automated systems. Regularly audit and evaluate algorithms to identify and address any unintended discriminatory effects or biases that could disproportionately impact certain individuals or groups.
- Security Measures: Implement robust security measures to protect the data and systems from unauthorized access, breaches, or malicious attacks. This includes encryption, access controls, regular security assessments, and adherence to best practices in cybersecurity.
- Independent Oversight and Accountability: Establish independent oversight mechanisms to ensure compliance with privacy safeguards and ethical considerations. This can involve regulatory bodies, external audits, or designated ethics committees that monitor and assess the deployment of automated systems.
- Periodic Impact Assessments: Conduct regular privacy impact assessments to evaluate the potential risks and consequences of deploying high-speed automated systems. These assessments should be comprehensive and consider both immediate and long-term impacts on privacy and individual rights.
- Continuous Monitoring and Adaptation: Implement ongoing monitoring and adaptation of automated systems to address emerging privacy concerns and ethical considerations. This may involve updating policies, practices, and technologies to align with evolving standards and societal expectations.
It is important to note that the specific privacy safeguards, ethical considerations, and oversight mechanisms may vary depending on the context, application, and legal requirements of the automated system being deployed.
Fact vs Hyperbole
I asked ChatGPT to “write an algorithm that can discern the difference between fact and hyperbole.” This is the result:
Developing an algorithm that can accurately discern the difference between fact and hyperbole is a challenging task, as it requires understanding the context, tone, and language nuances of a statement. While it is difficult to create a foolproof algorithm for this purpose, I can provide you with a basic approach that can serve as a starting point. Keep in mind that this algorithm may not be perfect and would require further refinement and training with real data to improve its accuracy. Here’s a possible algorithm outline:
- Gather a dataset: Collect a dataset containing labeled examples of statements that are either facts or hyperbole. You’ll need a significant number of examples to train and validate your algorithm effectively.
- Preprocess the data: Clean and preprocess the text data by removing punctuation, converting to lowercase, and handling any other necessary text normalization steps.
- Feature extraction: Extract relevant features from the text data that can help distinguish between facts and hyperbole. Some possible features to consider are:
  - Presence of exaggeration indicators (e.g., “never,” “always,” “everyone”)
  - Intensity of adjectives and adverbs
  - Contextual clues (e.g., sarcastic or ironic language, informal phrases)
- Train a classifier: Select an appropriate machine learning algorithm (e.g., Naive Bayes, Support Vector Machines, or neural networks) and train a classifier using your labeled dataset. Split the dataset into training and testing sets to evaluate the algorithm’s performance.
- Evaluate the classifier: Measure the performance of your classifier using appropriate evaluation metrics such as accuracy, precision, recall, and F1-score. Adjust the algorithm and features as needed to improve its performance.
- Incorporate context and domain knowledge: Consider incorporating additional context and domain-specific knowledge to enhance the algorithm’s understanding. For example, if you’re working with a specific type of text (e.g., news articles, social media posts), you can leverage specific language patterns and knowledge associated with that domain.
- Continuous improvement: Regularly update and refine your algorithm by incorporating user feedback and new data. This iterative process helps improve its accuracy and adaptability over time.
Remember that developing an algorithm to discern fact from hyperbole is a challenging problem due to the inherent subjectivity and context-dependency of language. The outlined algorithm provides a starting point, but further research, experimentation, and tuning will be required to achieve higher accuracy.
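As a toy illustration of the preprocessing and feature-extraction steps in that outline, here is a minimal sketch in Python. The indicator word lists and the scoring weights are invented for illustration only; a real classifier would learn its weights from the labeled dataset described in step 1 rather than rely on hand-picked terms.

```python
import re

# Hand-picked indicator lists, standing in for features a trained model
# would learn from labeled data (these sets are illustrative, not exhaustive)
ABSOLUTES = {"never", "always", "everyone", "nobody", "all", "none"}
INTENSIFIERS = {"literally", "absolutely", "totally", "incredibly", "unbelievably"}

def tokenize(text):
    """Preprocessing: lowercase the text and strip punctuation and digits."""
    return re.findall(r"[a-z']+", text.lower())

def extract_features(text):
    """Feature extraction: turn a statement into simple numeric signals."""
    tokens = tokenize(text)
    n = max(len(tokens), 1)
    return {
        "absolute_ratio": sum(t in ABSOLUTES for t in tokens) / n,
        "intensifier_ratio": sum(t in INTENSIFIERS for t in tokens) / n,
        "exclamations": text.count("!"),
    }

def looks_like_hyperbole(text, threshold=0.0):
    """Toy stand-in for the trained classifier: a hand-weighted score.

    The weights here are arbitrary; training (steps 4-5 of the outline)
    would replace them with values fitted to real labeled examples.
    """
    f = extract_features(text)
    score = (4 * f["absolute_ratio"]
             + 4 * f["intensifier_ratio"]
             + 0.5 * f["exclamations"])
    return score > threshold

print(looks_like_hyperbole(
    "Everyone always says this is literally the best thing ever!"))  # → True
print(looks_like_hyperbole(
    "Water boils at 100 degrees Celsius at sea level."))  # → False
```

Even this crude scorer shows why the problem is hard: absolute words and intensifiers also appear in perfectly factual statements, which is exactly why the outline insists on training against real labeled data and incorporating context.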
Humans Being Human
We love our tools. Especially the ones that automate various tasks. So here we are learning as we go. I hope we embrace taking better care of each other and all our Earthly relations. Go in peace. Live long and prosper.