When to Integrate AI (and When to Avoid It)

Dec 24, 2025

Lessons I Learned After Joining Addis Software

When I first joined Addis Software, my instinct was simple: “Can AI solve this problem?” I often defaulted to thinking in terms of models, predictions, and automation. Intelligence felt like the magic ingredient that could make systems smarter, faster, and more flexible.

After getting involved in two different teams here, I’ve realized that adding AI isn’t always the right solution. Sometimes, simpler approaches are safer, more reliable, and easier to maintain. Here’s what I’ve learned.

What We Mean by AI

For this discussion, AI refers to systems that learn patterns from data and make predictions or decisions without being explicitly programmed for every scenario. This includes:

  • Machine learning models
  • Neural networks and deep learning
  • Natural language processing (NLP) systems
  • Computer vision models

AI shines in situations with uncertain outcomes, large-scale data, or patterns that are hard to define manually. But not every problem that looks like an AI problem truly needs it. Let’s look at some of my personal experiences:

Lessons From Real Projects

1. Avoid AI as a Quick Fix for Fragile Systems

During onboarding, I worked with a team that gathered financial data via web scraping. Because we didn’t control these websites, parsers occasionally broke whenever a site updated. At first, I thought we could use AI to automate scraper updates. The idea was to feed the old scraper and the previous site version alongside the new version into an AI system, which would then generate a scraper adapted to the changes, temporarily reducing the need for manual fixes.

After careful consideration, and given that the system dealt primarily with financial data, I realized this approach would introduce more risks than benefits:

  • External AI services could raise security, compliance, and reliability concerns.
  • Running AI internally would increase infrastructure and maintenance costs.

Lesson: In sensitive production systems, a simple, clear failure mode with a well-defined fix is often safer than an “intelligent” workaround.
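To make that lesson concrete, here is a minimal sketch of the fail-fast approach (the function names, the marker string, and the page structure are all hypothetical, not the actual scraper code): the parser stops with a clear, actionable error the moment the site changes, instead of attempting an “intelligent” self-repair that could silently produce wrong financial data.

```python
# Hypothetical sketch: a parser that fails fast with a clear error
# instead of trying to self-heal when the site layout changes.
from dataclasses import dataclass


@dataclass
class ParseResult:
    balance: float


class SiteLayoutChanged(Exception):
    """Raised when the page no longer matches the expected structure."""


def parse_balance(html: str) -> ParseResult:
    # The marker below is an illustrative stand-in for a real selector.
    marker = '<span id="balance">'
    start = html.find(marker)
    if start == -1:
        # Clear failure mode: stop, alert, and let a human update the parser.
        raise SiteLayoutChanged("balance marker not found; parser needs an update")
    end = html.find("</span>", start)
    value = html[start + len(marker):end]
    return ParseResult(balance=float(value))
```

The `SiteLayoutChanged` alert pinpoints exactly what broke and when, which is the “well-defined fix” the lesson refers to.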

2. Complex Doesn’t Always Mean AI

While studying the domain guidelines needed for a product we were working on, I came across models used to simulate physical systems over long periods. I initially assumed they were machine learning models, but to my surprise, they were grounded in physical laws, equations, and domain expertise.

These models were:

  • Interpretable and explainable
  • Stable and predictable
  • Validated against established knowledge

Introducing AI here would have reduced transparency and trust without improving outcomes.

Lesson: Complexity doesn’t always require AI; sometimes, domain knowledge and deterministic models are better.

Problems That Often Don’t Need AI

Rule-Based or Deterministic Systems

  • Scientific simulations, financial calculations, or regulated systems
  • Transparent, easy to test, and easier to maintain than learned models
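For instance, a regulated financial calculation is usually written as an explicit, testable rule rather than a learned model. A minimal sketch (the rate and the rounding policy are illustrative assumptions, not domain guidance):

```python
# Deterministic rule: simple monthly interest, rounded to cents.
from decimal import Decimal, ROUND_HALF_UP


def monthly_interest(balance: Decimal, annual_rate: Decimal) -> Decimal:
    """Compute one month of simple interest, rounded half-up to cents."""
    interest = balance * annual_rate / Decimal(12)
    return interest.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
```

Every input maps to exactly one output, so the rule can be verified line by line against the regulation it implements.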

Optimization and Search Problems

  • Scheduling, timetabling, resource allocation
  • Traditional algorithms or heuristics are often more reliable than AI
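A classic case is interval scheduling: a greedy algorithm that always picks the earliest-finishing compatible interval is provably optimal for maximizing non-overlapping bookings, with no training data required. A minimal sketch:

```python
# Greedy heuristic: sort by end time, keep each interval that starts
# at or after the end of the last selected one.
def max_non_overlapping(intervals: list[tuple[int, int]]) -> list[tuple[int, int]]:
    """Return a maximum set of pairwise non-overlapping (start, end) intervals."""
    chosen: list[tuple[int, int]] = []
    last_end = float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:
            chosen.append((start, end))
            last_end = end
    return chosen
```

The result is deterministic, easy to test, and explainable to anyone who asks why a particular booking was rejected.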

Systems with Limited or Unreliable Data

  • AI models depend on quality data; limited or noisy data can make them brittle
  • Simple logic-based solutions provide more stability early on
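With only a handful of noisy examples, a transparent rule often outperforms a model trained on them. A hypothetical transaction-flagging baseline (the threshold and the trusted-country logic are illustrative assumptions) can run stably on day one and be replaced by a model only if enough reliable data accumulates:

```python
# Logic-based baseline: flag large transfers or transfers from
# countries outside a trusted set. Thresholds are illustrative.
def flag_transaction(amount: float, country: str, trusted_countries: set[str]) -> bool:
    """Return True if the transaction should be held for review."""
    return amount > 10_000 or country not in trusted_countries
```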


Hidden Costs of AI

AI is not free of trade-offs. Some hidden costs include:

  • Increased System Complexity: Data pipelines, model training, monitoring, and retraining add operational overhead.
  • Explainability Challenges: Black-box AI models can be hard to trust in critical systems. Explainable AI (XAI) can help, but it comes with trade-offs; for example, it can be computationally expensive or provide only approximate explanations.
  • Long-Term Maintenance: Models degrade over time due to data drift, requiring ongoing updates.

When AI Makes Sense

AI is most useful when:

  • There are no clear rules to follow
  • Large volumes of data are available
  • Patterns are difficult to define manually
  • The problem is one of prediction or classification

Examples include: speech recognition, image or video analysis, and recommendation systems.

Asking Better Questions

The biggest lesson I’ve learned is that the right question isn’t “Can we use AI?” Instead, ask:

  • What is the simplest solution that reliably solves the problem?
  • Does AI reduce risk, or does it increase it?
  • Can AI provide value while remaining cost-effective and trustworthy?

Final Thoughts

One of the most surprising lessons at Addis Software is that most of the company’s most reliable and valued products contain little to no AI. Their strength comes from clarity, reliability, and deep problem understanding, not from adding intelligence.

Learning when not to use AI has been just as important as learning how to use it. When AI adds real value, it should be embraced, but when it doesn’t, simpler solutions often lead to safer, more maintainable systems.
