BLOG@CACM

They Can Include AI, But Should They?

Teaching students about sensible solutions in the age of AI hype.


I’ve come to believe the most valuable skill we can teach in technology education isn’t how to implement something. It’s how to decide whether something should be implemented at all. That question is especially urgent in the age of AI. We’ve trained our students to use tools like JIRA, write user stories, and diagram processes. But this kind of analytical reasoning—asking “Is this the right problem?” or “Does this solution make sense here?”—isn’t easily taught through lectures, textbooks, or even abstract cases. It must be experienced and developed through practice.

Nowhere is this gap more visible than in Systems Analysis and Design courses. These classes often focus on documentation, not judgment; on what to build, not whether to build it. As AI hype floods our classrooms and boardrooms, we risk producing graduates who can specify requirements for machine learning features but can’t explain whether AI adds value to the problem they’re solving.

Can We Teach What Supposedly Cannot Be Taught?

Over the past three years, I redesigned my undergraduate Systems Analysis course across three cohorts:

• In 2023, students completed a chatbot simulation with fixed requirements.

• In 2024, they worked on a fictional banking app with modest ambiguity.

• In 2025, they partnered with real clients (student entrepreneurs) on three different projects. Two projects had AI already built into the concept, while one was “AI-agnostic,” requiring students to determine whether AI belonged at all.

The contrast between these two kinds of projects proved important. Students working on the AI-agnostic project had to start from first principles: Does AI solve an actual problem here? Those with AI-embedded projects still faced uncertainty, but of a different kind: What specific AI functionality made sense, and how should it be implemented?

Each week, students posted short reflections on what they were learning. These microblogs, along with their project artifacts, offered a window into how their thinking evolved.

The difference was striking. The 2025 students, faced with real clients and genuine uncertainty, didn’t just complete assignments. They framed business problems. They questioned assumptions. They made decisions that resembled real-world consulting.

One student wrote:

“The moment Business Analysis clicked was during our first meeting with the client. We weren’t there to take notes. We were there to understand the business context and propose solutions that could actually help.”

That wasn’t just a shift in skill. It was a shift in identity.

Why Uncertainty Mattered

The content across the three years was identical. What changed was the structure of the problem. Specifically, two elements had the greatest impact:

1. Structured Uncertainty

In 2025, students working on the AI-agnostic project weren’t told whether AI belonged in their solutions. They had to decide. Even those with AI-embedded projects had to determine which specific applications made sense. They had process guidance, but the outcome remained open.

Interestingly, the AI-agnostic project teams showed the most dramatic shifts in their thinking. When students had to justify AI from scratch rather than implement predetermined functionality, they engaged in deeper reasoning about technology fit and business value.

2. Client Accountability

The stakes were real. Students weren’t designing for a grade. They were advising a client who might actually use their recommendation. That created urgency and focus. They wanted to be right for reasons that went beyond school.

Together, these factors created an environment where judgment wasn’t an extra. It was the work.

From Tools to Thinking

What changed was not just what students did, but how they saw their role. Their language shifted. They used fewer certainty terms like “always” and “definitely” and more flexible language like “might” and “depends.” They started using first-person statements like “as an analyst.” They moved from checklist logic to professional reasoning.

One team, working on the AI-agnostic community-building app, reflected:

“We analyzed the non-functional requirements. Performance, scalability, reliability. We concluded that some features warranted AI, others didn’t. We proposed a roadmap that balanced value, cost, and complexity.”

That’s not just competent. It’s thoughtful. And thoughtful is what real analysts need to be.

What Educators Can Do Differently

Systems Analysis education doesn’t need a total overhaul. But it does need to move beyond applying techniques.

Here are five low-cost changes that made a difference in our course:

• Introduce ambiguity. Don’t decide the tech in advance. Let students determine whether AI fits the problem.

• Bring in real stakeholders. Student entrepreneurs or campus organizations work fine. What matters is that the interaction feels real.

• Assess reasoning, not just artifacts. Ask: Why this solution? How does it create value?

• Treat students as advisors. Position them as consultants, not requirement scribes.

• Use reflection to build identity. Prompt students to articulate what they’re learning, not just about tools, but about themselves.

What They Learned to See

The biggest shift wasn’t in the wireframes students submitted. It was in what they noticed.

At the start of the semester, one student described business analysis as “gathering requirements and documenting them correctly.” By the end, they wrote:

“We’re not just documenting what clients say. We’re interpreting what they mean, identifying needs they haven’t voiced, and making judgment calls about what technologies actually fit.”

That’s the shift we should aim for. From task completion to judgment. From execution to reasoning. And that’s the kind of thinking we need in a world where AI systems are easy to build but hard to justify.

As educators, we must help students learn not just how to build, but how to decide whether something should be built at all.

Shawn Ogunseye

Shawn Ogunseye is Assistant Professor of Computer Information Systems at Bentley University, Waltham, MA. His work sits at the intersection of enterprise systems architecture, AI strategy, and data governance—where the hardest choices shape enduring advantage.

