Decoding the Dangers: The Achilles' Heel of AI's Training Data, Algorithms and Analytics
Updated: Jul 26
Image credit: Shutterstock
Observing how analytics evolve and transform businesses' operations and decisions is fascinating. Yet, it comes with an element of fear. I'm a huge technology buff, and AI has been an excellent brainstorming partner, but I get the feeling we are all in a lab experiment with no control group or standards of ethics.
I don't think the world is full of villains steepling their hands in some sinister plot to manipulate and control others with AI. In fact, I'm a fan of AI analytics; they have been powerful tools since the 1990s. Over the past decade, tremendous advancements in AI-powered analytics have allowed us to identify patterns and correlations in real time across vast data sets in areas such as healthcare diagnostics, fraud detection, predictive maintenance and supply chain optimization. Compared to traditional methods, AI is not only more sophisticated; it's faster, more accessible and more user-friendly than most of us ever imagined. We should remember that AI's power is now available to everyone, driving our thinking, discussions and decisions.
On that thought, I wondered if companies in the private sector consider their own human biases that are fed into the underlying data models. What about governments? Are they being diligent in the development and implementation of AI? Is there any oversight?
Well, yes and no...
While the Consumer Financial Protection Bureau, the Justice Department's Civil Rights Division, the Equal Employment Opportunity Commission, and the Federal Trade Commission are doing their best to protect our rights, they have several limitations.
The first limitation is hard to blame them for: given AI's rapid pace of evolution, these agencies lack the resources and expertise to keep up with the latest developments and emerging challenges.
One of the most significant issues is these agencies' lack of diversity and representation within their own walls.
Is it even possible to identify all the biases in the regulations and enforcement necessary to ensure that AI is being developed bias-free and used fairly with a homogenous group of decision-makers calling all the AI shots? You decide.
As long as agencies are scrambling to keep up with their own data breaches and low diversity numbers, efforts to strengthen these weaknesses will be muted, and AI biases will be present at every turn.
The FTC has published guidance on best practices for companies using AI and has taken enforcement action against a handful of companies that engaged in discriminatory practices. Nonetheless, these agencies do not have enough resources, expertise, or people of color in decision-making positions who can effectively promote diversity in data representation and address regulatory gaps in the legal framework.
Beyond the Workplace: How AI Analytics Could Affect Our Lives, Nation, and World
Given the rise of AI, many people are worried about their jobs. However, the broader implications of AI analytics go far beyond the workplace. Think government. Think global. What will happen once those in power begin to rely heavily on AI to make decisions that directly impact your life and the lives of your children, parents and everyone else? Hear me out.
Again, I love AI, but the FTC and DOJ cannot catch everyone. They are not penalizing companies for a lack of diversity during AI development, a lack of diverse data, or a lack of diverse supervised AI training - even though each results in biased AI analytics, which will, by default, produce biased business decisions.
The Dark Side of AI Analytics: How Biases Can Perpetuate Inequalities and Injustices
Have we fully considered the ramifications of AI? Are its creators and trainers thoroughly analyzing the potential positive and negative consequences? Have the government and civic officials anticipated and taken steps to address the potential impact on our nation and its relationships with allies, adversaries and nation-states? Shouldn't we be proactive rather than reactive with the tools at our disposal? More awareness is crucial. All AI users should have a basic understanding of the potential biases AI can present.
If AI analytics are biased or incomplete (and basic math of racial inequity and employment disparities shows us they are), then the AI model will produce biased results. Following this logic, AI-based government decisions will undoubtedly affect our population, potentially perpetuating and amplifying existing inequalities and injustices.
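To make that logic concrete, here is a toy sketch in Python. Everything in it is invented for illustration - the "loan decisions," the groups, and the numbers are hypothetical - but it shows how a model that faithfully learns from historically skewed outcomes simply reproduces, and even hardens, that skew:

```python
# Toy illustration (all data invented): a model trained on historically
# biased decisions reproduces the bias in its own predictions.

# Hypothetical historical loan decisions: (group, approved) pairs.
# Group A was approved 80% of the time; group B only 40% of the time.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 40 + [("B", 0)] * 60

def approval_rate(records, group):
    """Fraction of historical decisions for `group` that were approvals."""
    decisions = [ok for g, ok in records if g == group]
    return sum(decisions) / len(decisions)

# "Training": this naive model simply learns each group's majority outcome.
model = {g: 1 if approval_rate(history, g) >= 0.5 else 0 for g in ("A", "B")}

# New applicants inherit the historical pattern - now amplified to all-or-nothing.
predictions = [(g, model[g]) for g in ("A", "B")]
print(predictions)  # group A is always approved, group B is always denied
```

The model never sees a rule like "deny group B"; it only minimizes error against the biased history, and the disparity falls out on its own.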
Are We Letting AI Analytics Run Rampant Without Considering the Implications?
Imagine if the decisions that determine whether you receive adequate healthcare or not are made by biased AI. It's scary! We should ensure that AI is built and trained on diverse and representative data to prevent new health disparities and avoid exacerbating existing ones.
If we want to make smarter decisions, AI designers, distributors, sellers, and public and private-facing entities need to take responsibility and be proactive, not only in hiring diverse AI, ML (machine learning), robotics, and NLP (natural language processing) researchers, scientists, developers, programmers, engineers and architects, but in training AI to audit itself. For example, the National Institute of Standards and Technology (NIST) could help enforce a regulation by developing AI frameworks and standards for assessing and mitigating risks. AI could be programmed to flag and report potential drawbacks or adverse outcomes automatically, but it needs representative data to do so. The consequences of biased AI could impact our careers, well-being, and livelihoods. A certain level of responsibility falls on all of us to build a future where the power of AI is wielded with wisdom, compassion, equity, and humanity.
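One concrete form of "training AI to audit itself" could be an automated fairness check run over every batch of model decisions. The sketch below is a minimal, hypothetical example - the groups and data are invented, and the four-fifths (80 percent) threshold is borrowed from US employment-discrimination practice as a rule of thumb, not a mandated standard:

```python
# Sketch of an automated bias audit: compare selection rates across groups
# and flag any group whose rate falls below 80% of the best-treated group's
# rate (the "four-fifths" rule of thumb). Groups and data are hypothetical.

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> selection rate per group."""
    totals, selected = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(ok)
    return {g: selected[g] / totals[g] for g in totals}

def audit(decisions, threshold=0.8):
    """Return groups whose selection rate is below `threshold` times the
    highest group's rate - candidates for human review, not auto-rejection."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return sorted(g for g, r in rates.items() if r < threshold * best)

# Group A selected 60% of the time, group B only 30% of the time.
decisions = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 30 + [("B", 0)] * 70
print(audit(decisions))  # flags group B: 0.30 < 0.8 * 0.60
```

Note the design choice: the audit flags disparities for human review rather than acting on them, since a statistical gap is a signal to investigate, not proof of bias by itself.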
"Preventing the harmful effects of biased AI is not just a question of making smart choices, but of demonstrating our collective responsibility to treat all individuals with dignity and respect." - Christine Alexy
Human Intelligence vs. Machine Intelligence: AI Analytic Ethics
Filtering out biases (down to the language level) should be the top priority (perhaps even federal law) and an ongoing initiative for safeguarding humans and keeping human intelligence ahead of machine and technological advancement. Why? Once more, inaccurate and misleading analytics can have serious consequences, from financial losses and legal liabilities to threats to public safety. A staggering 67 percent of all artificial intelligence specialists are White, while Hispanics or Latinos make up a mere 11.3 percent and Black or African Americans account for only 10.2 percent - a deeply troubling imbalance.
The lack of diversity among AI specialists highlights the urgent need to immediately address the systemic biases perpetuating such inequality because they are currently being injected into AI's training data and algorithms. It's crucial to recognize that these biases are not just a matter of diversity but also have significant consequences for how AI is used and who benefits from it in the long run.
"AI should be piloted and tested by the people . . . not on the people." - Christine Alexy
Over 1,000 new AI apps have popped up in the past few weeks, appearing at an accelerating rate like dandelions in the spring! As colorful and refreshing as that growth may be, AI does not come with mandatory warnings, training, or ethics awareness to inform people of the lines of humanity being crossed, those that should not be crossed, or those that continue being crossed as we enter the metaverse.
We should be cautious about relying on AI analytics to make decisions for us, as there is currently more than a 50 percent chance it was trained with implicit biases. Instead of using it blindly, we should treat AI as a tool for crunching big data, not an unquestioned replacement for human moral decency.
As mentioned in my previous article, we need to drive performance with a human touch; AI should be piloted and tested by the people - not on the people.
What are your thoughts?