Algorithms trained on human data are now present in the daily lives of billions of people. And 2.2 billion of them are on YouTube.

On the site, the ease with which a person can be transported from any innocuous search to the lunatic fringe of YouTube is startling. This reporter was helping his son research outer space for his school project. When he searched for "Saturn," the first results were mostly documentaries. One of the recommended videos was "10 facts you didn't know about space." That video led to additional recommendations such as "can you believe it" videos, a synthesized voice reading Nostradamus predictions and a clip "they don't want you to see" of pro-Putin propaganda.
But when AI is influenced by a flood of data with no human management, it has the potential to go in directions we don’t expect. Some of those instances are easy to brush off, like robots trying to escape their labs to reach freedom, but not all unintended consequences of AI are so benign. Remember the Microsoft Twitter bot that accidentally learned to act racist?
Unknown unknowns. The introduction of bias isn’t always obvious during a model’s construction because you may not realize the downstream impacts of your data and choices until much later. Once you do, it’s hard to retroactively identify where that bias came from and then figure out how to get rid of it. In Amazon’s case, when the engineers initially discovered that its tool was penalizing female candidates, they reprogrammed it to ignore explicitly gendered words like “women’s.” They soon discovered that the revised system was still picking up on implicitly gendered words—verbs that were highly correlated with men over women, such as “executed” and “captured”—and using that to make its decisions.
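To make the mechanism concrete, here is a minimal sketch (toy data, not Amazon's actual system) of how a model picks up proxy features: train a classifier on biased historical outcomes and it will assign weight to any token that correlates with those outcomes, whether or not the token is explicitly gendered.

```python
# Hedged illustration of proxy bias, not Amazon's actual tool:
# even with explicit tokens like "women's" removed, a model can
# latch onto words that merely correlate with one group.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy resumes: label 1 = "hired" in the biased historical data.
resumes = [
    "executed project roadmap, captured key accounts",
    "executed migration plan, captured market share",
    "led community outreach, organized mentoring program",
    "organized volunteer program, led fundraising drive",
]
labels = [1, 1, 0, 0]  # biased past outcomes, not merit

vec = CountVectorizer()
X = vec.fit_transform(resumes)
clf = LogisticRegression().fit(X, labels)

# Inspect learned weights: "executed"/"captured" act as proxies.
for word, weight in sorted(
    zip(vec.get_feature_names_out(), clf.coef_[0]),
    key=lambda t: t[1],
):
    print(f"{word:12s} {weight:+.2f}")
```

Dropping an explicit token from the vocabulary does nothing here; the correlated verbs carry the same signal, which mirrors what Amazon's engineers reportedly found.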
A study reported last year by MIT Technology Review found that training a "regular" AI using a single high-performance graphics card has the same carbon footprint as a flight across the United States. Training a more sophisticated AI was even worse, pumping five times more CO2 into the atmosphere than the entire life cycle of an American car, including its manufacturing.

Ecologists have proposed rules for flying near animals. While a human pilot was responsible for the bear video shot in Russia, autonomous drones are on the horizon, which means more freaked-out beasties. Advocates worry that the intersection between animals and tech is largely ignored.

“Animal ethicists and animal rights activists are often not interested in machines and systems,” says Oliver Bendel, an ethics professor at the University of Applied Sciences, Northwestern Switzerland. “Conversely, computer scientists and AI researchers are often not interested in animals. They do not appear in their world.”
Some AI tools may also collect data on users that many would find intrusive. For example, social media platforms may collect data on a user's age, name, location, online activity and photo metatags for the purpose of creating a more targeted advertising experience.
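To see what "photo metatags" can expose, here is a minimal sketch using the Pillow library to read EXIF metadata from an image. The file name is hypothetical, and this illustrates the data channel rather than any platform's actual pipeline.

```python
# Minimal sketch: reading EXIF metadata from an uploaded photo
# with Pillow. Tags often include the capture time, device model,
# and (if present) GPS coordinates.
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("uploaded_photo.jpg")  # hypothetical file
exif = img.getexif()

for tag_id, value in exif.items():
    name = TAGS.get(tag_id, tag_id)  # e.g. DateTime, Model
    print(f"{name}: {value}")

gps = exif.get_ifd(0x8825)  # GPSInfo sub-IFD, if the photo has one
if gps:
    print("GPS tags:", dict(gps))
```

Stripping these tags before upload is the usual mitigation on the user's side.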

Due to the amount of content posted on platforms such as TikTok and YouTube, AI algorithms are commonly used to moderate content. However, this can lead to improper takedowns and inappropriate content that slips through the cracks. For example, YouTube's use of AI content moderation once led to many incorrect video takedowns that creators appealed after the fact.
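A toy sketch makes the failure mode clear (illustrative scores only, not YouTube's system): automated moderation usually reduces to a model score compared against a threshold, and any single threshold trades improper takedowns against content that slips through.

```python
# Toy threshold moderation: scores are hypothetical model outputs.
# Raising the threshold lets more violations through; lowering it
# takes down more legitimate videos. No setting avoids both errors.
videos = [
    ("legit tutorial",   0.20, False),
    ("news report",      0.55, False),  # discusses violence -> high score
    ("actual violation", 0.60, True),
    ("borderline prank", 0.45, True),
]

THRESHOLD = 0.5
for title, score, is_violation in videos:
    removed = score >= THRESHOLD
    if removed and not is_violation:
        print(f"improper takedown: {title}")
    elif not removed and is_violation:
        print(f"slipped through:   {title}")
```

Shifting THRESHOLD in either direction only trades one error type for the other, which is why platforms pair automated takedowns with an appeals process.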

There is also concern about the spread of deepfakes on social media for malicious social and political ends. A bad actor could circulate fabricated images or videos of someone else for political or monetary gain.
Case studies have highlighted the challenges of programming ethical decisions into AI systems. One such case is the 2018 Uber crash in Tempe, Arizona, where an autonomous vehicle struck and killed a pedestrian. The incident revealed that the vehicle's sensors had detected the pedestrian, but the AI system had failed to identify her as a pedestrian and had not initiated the emergency braking system. The accident sparked a debate about who should be held responsible for the accident, with some arguing that the responsibility fell on the AI system, while others argued that the human operator should have intervened.
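A stripped-down sketch of the decision chain (hypothetical logic, not Uber's actual software) shows how braking can be gated on a classification even after a detection succeeds:

```python
# Hypothetical perception-to-braking chain, illustrating the failure
# mode described above: detection succeeded, classification did not,
# and braking was gated on the (wrong) class.
from dataclasses import dataclass

@dataclass
class Detection:
    distance_m: float
    label: str  # classifier output, e.g. "unknown", "pedestrian"

def should_emergency_brake(d: Detection) -> bool:
    # Gating the brake on a specific class is the flaw: an object
    # that is detected but labeled "unknown" triggers no response.
    return d.label == "pedestrian" and d.distance_m < 30.0

obj = Detection(distance_m=25.0, label="unknown")  # detected, misclassified
print(should_emergency_brake(obj))  # False -> no braking despite detection
```

Part of the ethical and engineering debate is precisely where such gates belong, and who is accountable when they fail.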
“With increased automation, people are nervous about job displacement. For instance, if drones and automated systems can oversee construction sites, or if AI-enhanced virtual reality can conduct site visits, what becomes of the human workforce traditionally involved in these tasks? While AI promises efficiency and precision, it's essential to consider the human element – the workers whose roles might become obsolete,” said Shojaei. “As AI makes some tasks redundant, it also opens doors to new roles and opportunities. Just as AI might reduce the need for manual site inspections, it can also create demand for AI specialists, digital twin architects, and smart contract developers.”
Why does AI require ever more energy?
Developing AI has come at ever greater computing expense, a demand amplified by the development of new consumer products.

In February 2023, Google announced an AI tool called Bard, which it expects to reach billions of users. Microsoft is adding a button for its 'Copilot'-branded AI to Windows keyboards.

"Search tools like Google could see a tenfold increase of their electricity demand in the case of fully implementing AI in it", the IEA writes.

Amazon, Microsoft, Google, and Meta more than doubled their combined energy use between 2017 and 2021, rising to around 72 terawatt-hours (TWh) in 2021, according to the International Energy Agency (IEA).

That is equivalent to approximately one quarter of all the electricity used by the United Kingdom in 2022.
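As a rough sanity check on these figures, the arithmetic below uses outside ballpark estimates: about 0.3 Wh per conventional search versus roughly 3 Wh with a large language model attached, around nine billion Google searches a day, and roughly 310 TWh of UK electricity use in 2022. These constants are assumptions, not numbers from the IEA passages above.

```python
# Back-of-envelope check on the energy claims above.
# All constants are rough outside estimates (assumptions),
# not figures taken from the IEA quotes in the text.

SEARCHES_PER_DAY = 9e9      # approx. daily Google searches
WH_PER_PLAIN_SEARCH = 0.3   # conventional search, Wh
WH_PER_LLM_SEARCH = 3.0     # search with an LLM attached, Wh

plain_twh = SEARCHES_PER_DAY * WH_PER_PLAIN_SEARCH * 365 / 1e12
llm_twh = SEARCHES_PER_DAY * WH_PER_LLM_SEARCH * 365 / 1e12
print(f"plain search: ~{plain_twh:.1f} TWh/yr")  # ~1.0
print(f"LLM search:   ~{llm_twh:.1f} TWh/yr")    # ~9.9, i.e. ~10x

BIG_TECH_TWH = 72            # Amazon/Microsoft/Google/Meta, 2021
UK_ELECTRICITY_TWH = 310     # approx. UK electricity use, 2022
print(f"share of UK electricity: {BIG_TECH_TWH / UK_ELECTRICITY_TWH:.0%}")  # ~23%
```

The first two lines reproduce the order of magnitude behind the IEA's "tenfold" remark, and the last line matches the quarter-of-UK-electricity comparison.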
It now appears that self-driving cars experience roughly twice the rate of rear-end collisions as do cars driven by people.

Clearly, AI is not performing as it should. Nor is this just one company's problem: every carmaker leveraging computer vision and AI is susceptible to it.
Use this information to answer this question in 3 sentences:
explain what you have learned about Artificial Intelligence and how it is currently being used in our society today.

Artificial Intelligence algorithms trained on human data are present in the daily lives of billions of people, with 2.2 billion users on YouTube alone being influenced by AI recommendations. However, AI can lead to unintended consequences such as bias in decision-making, environmental concerns due to high energy consumption, and ethical dilemmas in cases like the 2018 Uber crash in Tempe, Arizona. As AI continues to advance and be integrated into various aspects of society, it is crucial to address these challenges and consider the implications for individuals and society as a whole.

shorten that

Artificial Intelligence is widely used in society, with billions of people on platforms like YouTube being influenced by AI recommendations. However, AI can lead to unintended consequences such as bias, environmental concerns, and ethical dilemmas. As AI continues to advance, it is important to address these challenges and consider their implications for individuals and society.