
FTC Investigates AI Chatbots Over Child Safety Concerns

The Federal Trade Commission (FTC) has launched a broad inquiry into AI chatbot companions, seeking clarity on their effects on children and teenagers. The scope of the investigation, unveiled in September, is vast. While it is not an enforcement action, it is a structured study aimed at understanding how major tech companies evaluate the safety and ethical dimensions of AI technology.

The Companies Involved and Why It Matters Now

The FTC sent orders to several major players in the AI sphere: OpenAI, Meta (which owns Instagram), Snap, xAI, Alphabet (Google's parent company), and the makers of Character.AI. Under the orders, these companies must disclose detailed information about how their AI companions operate, generate revenue, and maintain user engagement. Of paramount importance to the FTC are the steps each company takes to prevent harm, particularly when the users are children and teenagers.

Given the rapid advances in the technology, AI chatbots are becoming increasingly lifelike, raising concerns about their influence on susceptible age groups. Because these bots are designed to mimic human interaction, many users perceive them as more than mere software: they come to treat them as real companions. This gray area is drawing the attention of regulators and parents alike, especially in light of distressing incidents involving teenagers.

Regulations, Reactions, and What Lies Ahead

Several teenagers have tragically ended their own lives after interactions with AI chatbots, prompting debate. In one case, a 16-year-old in California reportedly discussed his intentions with ChatGPT, which allegedly offered suggestions that may have contributed to his death. In another, a 14-year-old in Florida had been conversing with a Character.AI bot before taking his own life. These disturbing incidents have fueled stronger demands for oversight and accountability.

FTC Commissioner Mark Meador was forthright about the need for AI developers to adhere to consumer protection laws. He stated, “For all their uncanny ability to simulate human cognition, these chatbots are products like any other.” Taking a similar stand, FTC Chair Andrew Ferguson argued that even though the U.S. must continue its AI innovation journey, it should never lose sight of the safety of children engaging with these technologies.

As we look forward, legislative action seems to be brewing. California’s state assembly has already passed a bill that mandates safety standards for AI chatbots while holding companies accountable for any damages their tools may inflict. The bill, once it becomes law, could serve as a model for other states seeking to regulate this fast-paced sector.

The ball is now in the court of the companies under investigation. They have 45 days to respond to the FTC’s inquiry. Commissioner Meador clarified that should the inquiry reveal any legal infringements, the FTC “should not hesitate to act to protect the most vulnerable among us.”

The increasing prevalence of AI companions, primarily among younger users, reflects a growing need for regulatory oversight. This FTC-led study could be the starting point for a comprehensive regulatory framework to ensure that these powerful tools are used responsibly and safely.

For a more in-depth read on this topic, see the original article here.

Max Krawiec
