
Encouraging an Evidence-Based Approach to AI
Most of us don’t know how electrons, transformers, and grounded lines work down to the last detail, but we all know the basics: plug something into an outlet, and it powers our gadgets and tools. We also know the basics of staying safe around electricity. We keep tools and fingers away from outlets, and we never let electrical devices or power lines touch water. We may have learned the details in science class, but these rules are “common sense” about electricity that most of us pick up before school.
AI common sense would include a basic understanding of what AI does and where it can go wrong, especially as its capabilities grow. Conversations about machine learning, neural networks, and large language models can be confusing, but people don’t need to master those terms to understand how AI affects their daily lives, including how it can be dangerous.
As AI becomes a more regular part of our lives, here are some ways we can build that common sense:
Accounting for the human element in AI
In today’s fast-paced tech world, it’s easy to get swept up in the possibilities of AI. But we have to remember that AI systems are built by humans, which means they can inherit the same biases and limits as the people who build them. Those biases can show up in the data used to train AI, leading to unfair treatment or discrimination. For example, an AI hiring system trained on skewed historical data may favor certain groups of applicants over others.
Learned bias is common in AI, but it is fixable. Responsible developers and innovators are working to reduce inequity in AI systems by attacking the problem from several angles:
- Training models on data that is comprehensive, representative, and diverse
- Testing models for disparate impact across groups and checking them for drift over time (see the sketch after this list)
- Implementing skills-based “blind hiring” for development teams
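To make that testing step concrete, here is a minimal sketch of a disparate-impact check. Everything in it is illustrative: it assumes a trained binary classifier (`hiring_model`), a pandas test set, and a hypothetical demographic column, and the four-fifths threshold is a common heuristic rather than a hard rule.

```python
# Illustrative disparate-impact check; model and column names are hypothetical.
import pandas as pd

def selection_rates(model, X: pd.DataFrame, group_col: str) -> pd.Series:
    """Fraction of positive predictions (e.g., 'advance to interview') per group."""
    preds = model.predict(X.drop(columns=[group_col]))
    return pd.Series(preds, index=X.index).groupby(X[group_col]).mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Lowest group selection rate divided by the highest (1.0 = perfectly even)."""
    return rates.min() / rates.max()

# Hypothetical usage:
# rates = selection_rates(hiring_model, X_test, group_col="gender")
# if disparate_impact_ratio(rates) < 0.8:  # the "four-fifths" heuristic
#     print("Possible disparate impact:", rates.to_dict())
```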
Together, these practices pair human judgment with technology to create a system of checks and balances that can catch unintentional prejudice.
Even with these safeguards, it is important to recognize that AI systems will not always make the right call. Acknowledging that openly builds AI common sense and helps users understand the risks and mistakes that can happen.
Combating machine bias
Automation bias happens when people trust automated systems, like AI, over their own judgment, even when the system is wrong. We tend to assume that computers don’t make the dumb mistakes people do, and to take a computer’s output at face value because it seems like a neutral machine. But AI tools do far more than add and subtract. As AI practitioners would put it, arithmetic follows rules, while AI makes predictions. The difference sounds small, but it matters: prediction makes AI prone to repeating biases in its training data, drawing false connections, or “hallucinating” information that doesn’t exist but reads plausibly.
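To make the rules-versus-predictions distinction concrete, here is a toy contrast. The deliberately skewed training data is an illustrative assumption: a rule-based function is exact by construction, while a model fit on data where the second number is always double the first learns that accident instead of addition, and fails once inputs leave the pattern.

```python
# Toy contrast between a rule and a learned prediction. The rule is exact by
# construction; the model is fit on deliberately skewed data where the second
# number is always double the first, so it learns that accident instead of "+".
import numpy as np
from sklearn.linear_model import LinearRegression

def add(a: float, b: float) -> float:
    return a + b                        # rule-based: exact for any inputs

X = np.array([[1, 2], [2, 4], [3, 6], [4, 8]], dtype=float)  # b is always 2*a
y = X.sum(axis=1)
model = LinearRegression().fit(X, y)

print(add(10.0, 3.0))                   # 13.0, guaranteed
print(model.predict([[10.0, 3.0]])[0])  # ~9.6: confidently wrong once inputs
                                        # leave the pattern it was trained on
```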
Putting too much faith in AI can end badly. In health care, for instance, a doctor might defer to an AI system’s diagnosis even when the evidence in front of them contradicts it. Naming this bias encourages people to question AI systems and seek other perspectives, making harmful outcomes less likely. The “explainability” features of some trustworthy AI systems can help here by providing justification and context for how a model reached its result.
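As one hedged illustration of what explainability tooling can look like, the sketch below uses scikit-learn’s permutation importance on a public dataset; it is a generic technique, not the proprietary explanation feature of any particular product. Shuffling a feature and watching the score drop shows how heavily the model leans on that feature, which gives a doctor or auditor a concrete question to ask: should this input matter that much?

```python
# Permutation importance: shuffle each feature and measure the score drop.
# Large drops mean the model depends heavily on that feature.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```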
Getting people to think critically
People can better understand how AI technologies affect the real world if they are encouraged to ask questions and stay curious. To promote AI common sense, we must sharpen our critical thinking and keep a healthy skepticism toward AI systems.
That means questioning AI-generated results, understanding the limits of the underlying data, and weighing the flaws of the algorithms. “Trust, but verify” should govern how humans interact with AI until a system has repeatedly shown itself to be accurate and effective, especially in high-stakes situations.
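In code, “trust, but verify” often takes the shape of a human-in-the-loop gate. The sketch below is a hypothetical version: the system accepts a model’s answer only when its confidence clears a threshold and escalates everything else to a person. The names and the 0.95 cutoff are illustrative assumptions, not a standard.

```python
# A hypothetical confidence gate: act on high-confidence model outputs,
# route everything else to a human reviewer. Threshold is illustrative.
from dataclasses import dataclass

@dataclass
class Decision:
    label: str
    confidence: float
    needs_human_review: bool

def gated_decision(probabilities: dict[str, float],
                   threshold: float = 0.95) -> Decision:
    """Accept the model's top answer only if it clears the threshold."""
    label, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    return Decision(label, confidence,
                    needs_human_review=confidence < threshold)

print(gated_decision({"benign": 0.97, "malignant": 0.03}))  # acted on
print(gated_decision({"benign": 0.55, "malignant": 0.45}))  # escalated
```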
DALL-E and Midjourney
This mindset helps people make better decisions and recognize where AI systems fall short. For example, readers of AI-generated news should be aware that the information could be wrong or misleading and should check reports against multiple sources. With programs like DALL-E and Midjourney already producing photorealistic images that are nearly indistinguishable from reality, we should all treat controversial or upsetting pictures with suspicion until we can confirm them with corroborating evidence, such as consistent images from different angles and reliable first-person reporting.
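For images, one weak but easy check is to look at whatever metadata survives. The sketch below, assuming the Pillow library is installed, dumps an image’s EXIF tags. Since metadata is trivially stripped or forged, an empty result proves nothing, and even a “Software” tag naming a generator is only a clue to weigh alongside the corroborating evidence described above.

```python
# A rough provenance probe using Pillow's EXIF reader. Metadata is easily
# stripped or forged, so treat this as one weak signal, never as proof.
from PIL import Image
from PIL.ExifTags import TAGS

def exif_summary(path: str) -> dict[str, str]:
    """Return human-readable EXIF tags for an image file, if any survive."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, str(tag_id)): str(value)
            for tag_id, value in exif.items()}

# Hypothetical usage: an empty dict means the metadata was stripped,
# not that the image is fake.
# print(exif_summary("suspicious_photo.jpg"))
```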
By understanding the human side of AI, combating machine bias, and encouraging critical thinking, we can give people the tools they need to make better choices and contribute responsibly to the development and use of AI technologies.
You don’t need to know how electricity works to avoid a downed power line. We should aim for the same level of understanding with AI. If we get there, we can help ensure that AI works for the greater good, does less harm, and remains a positive force in our lives.