Why I Didn’t Want Another “AI Course”
I’ve seen this movie too many times: someone watches a few demos, learns the buzzwords, and suddenly has opinions about AI strategy.
I wanted the opposite.
My 2025–2028 plan was explicit: build foundations that survive hype cycles, then add an executive lens that lets me talk about AI in business terms without losing technical integrity.
That’s why, for me, Berkeley Executive Education wasn’t “endgame.”
It was the moment where the technical foundation met a business framework — where my knowledge stopped being “what I know” and started becoming “what I can do with it.”
The Pre-Berkeley Phase: Earning the Right to Have Opinions
Before Berkeley, I didn’t want “AI enthusiasm.” I wanted literacy and fluency.
Books that shaped my worldview (not just my vocabulary)
Two books mattered in a very specific way:
- Kai-Fu Lee’s AI Superpowers gave me urgency and the geopolitical/business-scale framing.
- Melanie Mitchell’s AI: A Guide for Thinking Humans kept me intellectually honest — forcing skepticism, limits, and clarity about what AI can and cannot do.
That combination built a worldview that I still use: move fast, but don’t hallucinate reality.
Foundations, not vibes
I did what many people skip because it’s not glamorous: I built a broad and usable base before touching any “strategy” talk.
- A major Python refresh, consolidating years of fragmented knowledge into a coherent toolkit — so experimentation and prototyping wouldn’t be blocked by basics.
- A mix of Udemy / online courses to fill gaps quickly and systematically (breadth first, then depth where it mattered).
- The Red Hat AI Foundations Technologist Certificate — one of the few structured programs I took that felt genuinely solid and aligned with real-world constraints.
- Andrew Ng’s Coursera courses (AI for Everyone, Generative AI for Everyone) as an orientation layer: good for executive framing and vocabulary, but not where you build deep intuition.
- A personal learning system built around active recall, structured notes, and friction reduction — because without repetition and structure, “learning” is just consumption.
This is the pre-Berkeley build I wrote down earlier: AI: My Roadmap, My Path, My Thoughts (77 Days Later).
What Berkeley Actually Added: The Executive Lens
Here is the truth that’s easy to miss:
Berkeley didn’t “teach me AI.”
It taught me how to think and speak about AI in an executive context without becoming a buzzword salesman.
The strongest value wasn’t a single lecture. It was the combination of:
- Business language: how to frame AI work in terms of value, trade-offs, risk, adoption, and operating model.
- Structure: a forced cadence, module-level assignments, and iterative learning.
- Peer learning: reading what strong participants produced, not just consuming content.
That last point is underrated. After each module (8 modules over ~10 weeks), we had 1–2 assignments. They weren’t “hard” in the mathematical sense — but they were good forcing functions. And after submitting, you could browse peer submissions.
That’s how you learn from the best.
Peer Learning: My Most Reliable Source of “Aha” Moments
I consistently got disproportionate value from reading one participant’s work: Dian Tjondronegoro.
His assignments had that rare combination: clarity, structure, and a practical perspective. Later I checked who he actually is: a professor and practitioner — which explained everything.
This is a key lesson: in cohort-based learning, the real “hidden curriculum” is the cohort.
Feedback loop: I wasn’t just active — I was auditing the course
I reviewed the material meticulously, found gaps and inconsistencies, and sent suggestions repeatedly — and Berkeley’s team genuinely listened.
They replied:
“I appreciate how deeply embedded you are in this course… keep bringing them.”
Module-by-Module: Where Things Clicked (and Where They Hurt)
Module 1 — AI and Business (Zsolt Katona): the language shift
This was my first “okay, this is different” moment.
Zsolt Katona impressed me because he didn’t present AI as a tech story.
He presented it as a business story with technical constraints — where the main skill is framing problems, value, and feasibility without lying to yourself.
It also recalibrated how I judge AI initiatives: not by how cool the model is, but by whether the problem deserves AI.
Module 2 — ML Basics (Thomas Y. Lee): descriptive vs predictive vs prescriptive + metrics intuition
This module gave me something I didn’t get anywhere else: a cohesive mental model connecting:
- descriptive analytics (what happened),
- predictive analytics (what will happen),
- prescriptive analytics (what should we do).
And then: evaluation.
The confusion matrix stopped being a diagram and became a decision tool.
Metrics became trade-offs:
- precision vs recall (sensitivity)
- specificity (and why it matters in high-stakes settings)
- accuracy as a sometimes-misleading comfort metric
This is the kind of intuition you can actually use in real projects, especially when stakeholders ask for “high accuracy” without understanding the cost.
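To make that trade-off concrete, here is a minimal sketch that derives the metrics above from raw confusion-matrix counts. The counts are invented illustrative numbers (an imbalanced dataset), not anything from the course:

```python
# Hedged sketch: computing the trade-off metrics discussed above from
# confusion-matrix counts. All numbers are made-up illustrations.

def confusion_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard classification metrics from confusion-matrix counts."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "precision":   tp / (tp + fp),   # of predicted positives, how many were right
        "recall":      tp / (tp + fn),   # sensitivity: of actual positives, how many we caught
        "specificity": tn / (tn + fp),   # of actual negatives, how many we cleared
    }

# Imbalanced example: 1000 cases, only 50 actual positives.
m = confusion_metrics(tp=30, fp=20, tn=930, fn=20)
print(m)  # accuracy is 0.96 while recall is only 0.60
```

This is exactly the “high accuracy” trap: 96% accuracy sounds great to a stakeholder, but the model misses 40% of the positive cases — which can be unacceptable in a high-stakes setting.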
Module 3 — Neural Networks & Deep Learning: my 8-hour backprop wrestling match
I came prepared for this. But I still remember the day I spent about 8 hours going through backpropagation step-by-step.
Not because Berkeley made it hard.
Because I refused to hand-wave it.
I worked through it until the visuals matched the math and the mechanism became obvious: gradients, chain rule, parameter updates, and what “learning” actually means under the hood.
That investment paid dividends later: once you truly internalize optimization, you stop treating deep learning like magic and start treating it like engineering.
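The mechanism I wrestled with fits in a few lines. Here is a toy sketch of one backprop step for a single sigmoid neuron with squared-error loss — the parameters, input, and learning rate are arbitrary illustrations, but the chain-rule structure is the real thing:

```python
import math

# Toy sketch: forward pass, chain rule, one gradient step.
# All numbers are arbitrary illustrations.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w, b = 0.5, 0.1   # parameters
x, y = 2.0, 1.0   # one training example (input, target)
lr = 0.1          # learning rate

# Forward pass
z = w * x + b
a = sigmoid(z)
loss = 0.5 * (a - y) ** 2

# Backward pass: chain rule, term by term
dloss_da = a - y                    # dL/da
da_dz = a * (1.0 - a)               # sigmoid derivative da/dz
dz_dw, dz_db = x, 1.0               # dz/dw, dz/db
grad_w = dloss_da * da_dz * dz_dw   # dL/dw
grad_b = dloss_da * da_dz * dz_db   # dL/db

# Parameter update: this is all "learning" means at this level
w -= lr * grad_w
b -= lr * grad_b
```

Recompute the loss with the updated parameters and it goes down — gradients, chain rule, update, repeat. Deep learning is this, scaled up and vectorized.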
Module 4 — Computer Vision: solid, classic… and not my main obsession
Computer Vision is foundational and well-established. The module was good.
But it wasn’t where I personally got the biggest leverage. For me, the next part was far more exciting.
NLP: the transition that matters in the real world
NLP was the point where multiple threads connected:
- BoW → TF-IDF
- LSA
- vector space thinking → vector search
- word2vec
We didn’t go deep into transformers, which would be the modern continuation — but the module still did the job: it built a coherent progression of representations and retrieval logic.
This matters because a lot of real enterprise AI today is not “AGI.” It’s retrieval, search, and decision support — and those foundations are the spine of it.
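The progression above can be sketched end-to-end in a few dozen lines: bag-of-words counts, TF-IDF weighting, then cosine similarity as a toy vector search. The corpus, query, and IDF smoothing choice are my own invented illustrations, not anything from the module:

```python
import math
from collections import Counter

# Hedged sketch: BoW -> TF-IDF -> cosine-similarity retrieval.
# Corpus and query are invented examples.

docs = [
    "the model predicts customer churn",
    "vector search retrieves relevant documents",
    "customer support tickets mention churn",
]

def build_tfidf(docs):
    """Return (vocab, idf, doc_vectors) for a tiny corpus."""
    vocab = sorted({w for d in docs for w in d.split()})
    n = len(docs)
    df = {w: sum(w in d.split() for d in docs) for w in vocab}
    idf = {w: math.log(n / df[w]) + 1.0 for w in vocab}  # simple smoothed IDF
    vectors = [vectorize(d, vocab, idf) for d in docs]
    return vocab, idf, vectors

def vectorize(text, vocab, idf):
    counts = Counter(text.split())       # bag-of-words counts
    return [counts[w] * idf[w] for w in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

vocab, idf, doc_vecs = build_tfidf(docs)
q = vectorize("customer churn", vocab, idf)
ranked = sorted(range(len(docs)), key=lambda i: cosine(q, doc_vecs[i]), reverse=True)
```

Swap the hand-rolled vectors for learned embeddings (word2vec and beyond) and the retrieval logic stays the same — which is why this progression is the spine of so much enterprise search and decision support.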
Module 5 — Robotics (Pieter Abbeel): the peak difficulty
They warned us: challenge ramps up exponentially from Module 1 to 5, then decreases.
That was not marketing. That was accurate.
Robotics — and specifically the way Pieter Abbeel presented it — was the most demanding part of the program for me. He’s the kind of speaker where you immediately know: this person is operating at a different level.
This module forced disciplined thinking:
- Markov Decision Process (MDP)
- Bellman optimality equation
- state value / state-action value
- Value Iteration
- V*, Q*, policies, and why “optimal” is a defined object, not a vibe
This was the module where I spent the most time to truly understand the machinery.
And it upgraded my mental model of “AI” from “prediction” to “decision-making under uncertainty.”
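The machinery behind those bullets fits in a small sketch. Here is Value Iteration on a made-up two-state, two-action MDP — the states, rewards, and discount factor are invented for illustration; the point is the Bellman optimality backup, not the numbers:

```python
# Toy Value Iteration on an invented 2-state, 2-action MDP.
# P[s][a] = list of (probability, next_state, reward) outcomes.
P = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 2.0)], "go": [(1.0, 0, 0.0)]},
}
gamma = 0.9  # discount factor

def q_value(s, a, V):
    """Expected return of action a in state s: sum_p p * (r + gamma * V[s'])."""
    return sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])

# Iterate the Bellman optimality backup: V(s) <- max_a Q(s, a)
V = {s: 0.0 for s in P}
for _ in range(200):
    V = {s: max(q_value(s, a, V) for a in P[s]) for s in P}

# V is now (approximately) V*; acting greedily w.r.t. it gives the optimal policy
policy = {s: max(P[s], key=lambda a: q_value(s, a, V)) for s in P}
```

Run it and “optimal” stops being a vibe: V* converges (state 1 settles at 2 / (1 − γ) = 20 here), and the greedy policy is a defined object you can read off the Q-values.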
Module 6 — AI Strategy (Zsolt Katona): from tools to advantage
This was one of my favorites.
Strategy is where people usually become hand-wavy — and where bad frameworks do damage.
Katona’s approach was refreshingly open and grounded.
It upgraded my focus on value and helped me think in levers rather than tools. It also made it easier to map AI initiatives to real business outcomes without pretending every use case is transformational.
Module 7 — AI and Organizations (Sameer B. Srivastava): the “Game of Thrones” layer
This module was a wake-up call.
Because the hardest part of AI in enterprises is often not the model. It’s:
- incentives
- power structures
- risk posture
- adoption mechanics
- operating model and governance
Sameer Srivastava illuminated that reality in a way that felt uncomfortably accurate: you can build something technically correct and still fail because the organization wasn’t designed to absorb it.
If you want AI to work, you’re not just shipping software. You’re changing how decisions get made.
Module 8 — Future of AI in Business: a clean landing into Capstone
By Module 8, my attention shifted almost entirely to Capstone.
The module served as a useful closure — but the real work product was what came next.
Capstone: Turning Learning into a Credible Plan
My biggest output wasn’t a quiz score. It was the Capstone.
Berkeley pushed me to translate everything into something that has business shape:
- a problem worth solving
- constraints and context
- what success means
- value hypotheses
- risk and governance
- realistic path to pilot → scale
This is where the executive lens becomes real: you stop describing AI and start proposing a plan that can survive contact with stakeholders, budget, and security teams.
The Closing Signal: Berkeley’s Recognition Letter
On November 19, 2025, Berkeley Executive Education issued a completion confirmation letter with explicit recognition for my engagement, feedback, and Capstone quality.
I’m including the content here (lightly cleaned up for formatting):
November 19, 2025
To whom it may concern,
This is to confirm that Olimp Boćkowski has met the completion requirements for the Artificial Intelligence: Business Strategies and Applications Program, running from September 4, 2025 to November 13, 2025, by Berkeley Executive Education, and has been awarded a digital certificate for this program.
We recognize Olimp for his steady engagement and commitment throughout the program. He provided detailed, thoughtful feedback that helped strengthen the course, and his Capstone project showed a strong understanding of the material and a clear plan to apply it in practice.
We appreciate Olimp’s effort and are pleased to acknowledge his work.
Sincerely,
Marose Eddy
Director, Partnerships
Berkeley Executive Education, Haas School of Business
I’m deliberately calling it what it is: a recognition and completion confirmation — not a generic participation badge, and not a traditional recommendation letter either.
What I’d Do Differently (If I Started Again)
- I’d protect more time for office hours — the learning facilitators shared fresh, high-signal context there.
- I’d manage my schedule more deliberately to get more of that live interaction.
- I’d define Capstone “success metrics” earlier — even if they’re imperfect.
What’s Next: Roadmap Continuation
Berkeley was not the finish line. It was the lens upgrade.
The foundation work (books, Python, structured study) gave me technical literacy.
Berkeley gave me the executive framing: how to tie AI to value, constraints, organizations, and adoption.
Now the roadmap continues: fewer “courses,” more work products — systems, prototypes, and writing that demonstrate applied competence rather than consumption.
Because in the end, the real credential is not a certificate.
It’s what you can build, explain, and deliver — in language that both engineers and executives can trust.