Prepared by Eduba for Babson College / eMerge Americas 2026

eMerge Americas 2026 / Booth 1047

A read on where AI fits in Babson's course architecture, and where it does not.

A one-page briefing for the Generator and Metropoulos Institute team at Babson. Written after reading the CIO 100 writeup and the AI 2.0 Plan, not before.

For
Faculty cohort, Specialty Lab, or Babson Miami module owners
From
Matt Creamer, CRO, Eduba
Method
Interpretable Context Methodology (ICM), submitted to ACM TiiS
01 / The read

What Babson has already done, and what comes after the training.

Babson has done what most schools have not. The Babson IT team delivered Copilot, MathBot, and a growing library of custom agents across the campus. The Generator has peer-trained 80% of faculty and runs eight Specialty Labs. The CIO 100 Award is the external proof.

The next mile is harder. Faculty still need a repeatable way to redesign a course around AI-aware work, not a tool list. The AI 2.0 Plan names course co-creation and multimodal work as the goal. The faculty playbook for getting there is the piece that is still being written. That is the problem this page is about.

Signals in view

  • Jul 2025: AI 2.0 Plan extends the EduAI Revolution into content co-creation, agents, and multimodal work.
  • Fall 2024: C. Dean Metropoulos Institute for Technology and Entrepreneurship launched.
  • 2025: Babson IT earns the CIO 100 Award for the GenAI rollout.
  • Ongoing: Eight Specialty Labs running inside The Generator, each with its own prompts, context files, and rubrics.
02 / The frame

A first pass at a Babson course, broken into the right layers.

Most teams that adopt AI start by putting LLMs where rule-based logic or traditional code would do the job faster, cheaper, and with better guarantees. A reasonable first read of a typical business school course splits closer to this.

60%

Traditional materials

Readings, cases, problem sets, cohort discussion. The core of a Babson course does not need an LLM wrapper. It needs clear scope and good prompts for the human work.

30%

Rule-based assessment

Structured feedback, rubric-driven grading, reproducible checks. Deterministic logic, not generative output. Faster for faculty and more defensible for students.

10%

Genuine AI work

Simulation, co-creation, multimodal synthesis. The AI 2.0 Plan moments, where a generative model actually earns its place in the syllabus.

The courses that succeed first are the ones where those layers are explicit in the syllabus design. That is the kind of read Eduba does in a single session: walk through one course, one lab, or one certificate module, and say where each piece belongs.

03 / The case and the method

Peer-reviewed, reproducible, already running inside a university.

At the University of Edinburgh, Eduba's founder Jake Van Clief published the Interpretable Context Methodology for exactly this problem. It treats agent context as a layered folder structure (L0 identity through L4 working artifacts), reproducible across courses and across labs. It ran inside a 52-member practitioner community, and it produced a measurable U-shaped edit pattern that told the research team where faculty actually spent their redesign time. That is the method we would apply to a Babson Specialty Lab or a single faculty cohort.
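The layered context described above is literally a folder hierarchy. As an illustrative sketch only: the briefing names L0 (identity) and L4 (working artifacts), so the layout below shows those two layers with the intermediate layers left as unspecified placeholders, and the example comments are assumptions, not the published ICM definitions.

```
course-context/
├── L0_identity/           # who the agent is for this course: role, voice, scope
├── .../                   # L1–L3: intermediate layers, not detailed in this briefing
└── L4_working_artifacts/  # in-progress outputs: drafts, rubrics, session notes
```

Because each layer is a plain folder, it can be copied, versioned, and compared across courses and labs without bespoke tooling, which is what makes the method reproducible across a faculty cohort.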

Academic collaboration

Edinburgh & UKICER

MSc Future Governance research placed inside a working university, in collaboration with UK and Ireland Computing Education Research (UKICER). Published methodology, 52-member practitioner community, reproducible edit pattern observed across faculty.

The paper

Interpretable Context Methodology: Folder Structure as Agent Architecture

Submitted to ACM TiiS. MIT licensed. Intended to be read, adopted, and improved on by other researchers and faculty.

04 / Track record

Work already in the field.

1,500+

Enterprise learners trained since May 2025, including Pacific Life and Colgate-Palmolive through Correlation One. 6,000 to 9,000 hours saved per year. 95% of participants still using the tools 30 days after the workshop.

40+

Executives trained at KPMG UK, one of the Big Four. Regulated-industry audience at leadership level, the same shape of audience as the Metropoulos Institute's executive programs.

Published

ICM submitted to ACM TiiS. Ethics Engine on arXiv: a psychometric assessment tool for evaluating ideological and moral patterns in LLMs. arXiv 2510.11742 / github.com/RinDig/AuditEngine.

05 / Notes on scope

Where Eduba leads, and where it hands off.

Jake and Matt also built an online community to 22,000 members in five weeks, which is how we know what distribution looks like when a methodology actually lands. That is context, not a pitch.

If Babson decides to build a shared internal context-management platform that serves all eight Specialty Labs with versioning, access control, and integration into the Microsoft 365 tenant, that is production engineering work.

Partnership note: Eduba partners with NLP Logix for work that sits below the orchestration layer. NLP Logix has been in machine learning since 2011 and fields a team of over 150 data scientists. The booth conversation stays on the Eduba methodology layer unless a production build becomes the right next step.

06 / Next step

Thirty minutes on one course, one lab, or one Miami module.

Pick the course, the Specialty Lab, or the Babson Miami certificate module that is closest to the AI 2.0 work right now. Bring the current syllabus or the current context file set. We will do a live orchestration audit on the call and leave you with a one-page redesign sketch you can hand to the faculty lead.

No slide deck. No demo. A working read on where each piece of the course belongs: traditional materials, rule-based assessment, or genuine AI work. If the fit is real, a scoped engagement comes next.

Book the orchestration audit
With Matt Creamer, Chief Revenue Officer, Eduba. calendly.com/thecro-eduba/30min