StartUpHub

How To Avoid AI Fails? Best Practices for AI Implementation!

Wednesday, 28 May 2025, 10:23

In 2026, AI marks its 70th anniversary as an academic term, a surprising milestone for a field that remained niche until the 2010s. Since then, AI has become a driving force of innovation, powered by vast data, cloud computing, better hardware, and smarter algorithms, igniting a modern Gold Rush. But rapid growth brings risks. The field evolves faster than regulators can react, creating a digital Wild West, with data flows replacing shovels and spades. At Sigma Software, we’ve seen this shift up close through a surge in AI project demand.

Most importantly, and the reason I am writing this: working on various AI projects, we have gained valuable insights into AI implementation and governance.

I took some notes and wanted to share a couple of guidelines for anyone building, or thinking about building, solutions with AI.

There are numerous great use cases

At this point, there are plenty of successful AI use cases: Amazon saved an estimated 4,500 developer-years by using AI to migrate applications from Java 8 to Java 17, and Lufthansa, in a joint project with Google, optimized flight paths to reduce fuel consumption and CO2 emissions.

For instance, Nike has harnessed artificial intelligence (AI) to elevate customer experience through personalized services and product customization. One standout innovation is the Nike Fit tool, which uses computer vision, machine learning, and augmented reality to scan customers’ feet with a smartphone camera and provide precise size recommendations. This addresses a significant issue, as over 60% of people wear the wrong shoe size.

Another cool case to mention is the recent experiment from Nigeria, which explored the use of generative AI as a virtual tutor to enhance student learning. Over six weeks between June and July 2024, students participated in an after-school program focusing on English language skills, AI knowledge, and digital literacy. Students who participated in the program outperformed their peers in all assessed areas. Notably, the improvement in English language skills was substantial, with learning gains equivalent to nearly two years of typical schooling achieved in just six weeks.

One story close to my heart is how AI systems are saving the lives of Ukrainian soldiers and civilians. AI-powered technologies are identifying drones and rockets swarming Ukrainian skies, helping to protect cities and mitigate destruction.

But, not everything in the AI world has gone according to plan.

Remember McDonald’s AI Suggesting 260 McNuggets?


In June 2024, McDonald’s announced the end of its trial of AI-powered drive-thru systems developed in partnership with IBM. Launched in 2021, the project aimed to improve order accuracy and speed. However, the system struggled to understand diverse accents and dialects, leading to frequent errors, including one infamous incident where it suggested an order of 260 McNuggets. The technology was switched off at all test locations by July 26, 2024. Despite this setback, the company remains open to exploring future voice-ordering solutions and continues its partnership with Google Cloud to integrate generative AI into other business areas.

In April 2024, New York City’s AI-powered chatbot, designed to help small business owners, came under fire for providing inaccurate and even unlawful guidance. Instead of simplifying access to essential information, the chatbot suggested solutions that violated local regulations, potentially jeopardizing businesses. This highlights the risks of deploying AI without rigorous testing, as such errors can have serious consequences.

One AI system even screened out job applicants based on age. For instance, AI hiring platforms might target job ads to younger audiences or prioritize resumes with keywords like “junior” or “recent graduate,” unintentionally excluding older, qualified candidates. Ironically, Plato might not even pass the initial screening these days (he continued teaching at his Academy until his death).

In 2020, a UK-based makeup artist who had been furloughed was asked to reapply for her position. Despite excelling in skills evaluations, she was rejected due to low scores from an AI tool that assessed her body language. This case illustrates how AI systems, especially those trained on non-diverse data, can perpetuate existing biases.

These failures are stark reminders that AI still has a long way to go. They underscore the importance of responsible implementation, ensuring that AI systems are ethical and reliable.

EU AI Act: Risk-Based Approach

You wouldn’t let your kids drive without a license—so why let teams use AI without guidance? Yet many do, exposing private data and code to security risks. We call this Shadow AI.

To address this, the EU introduced the AI Act. Any company using AI for EU data or decision-making must now comply.

The law bans “unacceptable risk” systems like social scoring and biometric categorization. High-risk AI—used in areas like healthcare, justice, or infrastructure—must meet strict requirements, including risk mitigation, transparency, and human oversight. Limited-risk systems like chatbots need clear labeling; low-risk tools face minimal rules. Think of it as AI parenting: high-risk kids need strict boundaries.
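To make the tiered logic concrete, here is a minimal sketch of the Act’s risk categories as a lookup table. The use-case names and the mapping are illustrative assumptions for this post, not a legal classification; real compliance work requires reviewing the Act’s annexes with counsel.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act, with their rough obligations."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "risk mitigation, transparency, human oversight"
    LIMITED = "clear labeling (e.g. disclose it's a chatbot)"
    MINIMAL = "no specific obligations"

# Hypothetical example mapping of use cases to tiers, for illustration only.
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "biometric categorization": RiskTier.UNACCEPTABLE,
    "medical diagnosis support": RiskTier.HIGH,
    "critical infrastructure control": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up the tier for a known use case; default to minimal risk."""
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
```

In practice the classification drives a governance checklist: anything landing in the high-risk tier should trigger documentation, testing, and human-oversight requirements before deployment.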
