Learn about AI and NoCode. Curated news, tutorials and interviews every week. Deep dives on how to build with AI and NoCode tools.
🔥 How to reduce AI hallucinations + new drops from Google & OpenAI
Published 4 days ago • 4 min read
Create something today with AI & Visual Development ✨
Welcome back, Creators! 👋
We've got +537 new builders joining our newsletter community this week.
This week we also finished editing the Create With 2025 recap video; it's packed with soundbites that will get you excited to explore building with AI.
Don't forget all the talks are available for free on our YouTube channel.
Pour yourselves a cuppa and let's jump in...
👇 Preview: In today's issue
🤖 Why does AI hallucinate and what can we do about it?
📺 Building AI Agents with Bubble.io
🎨 A 6-step guide to create better AI images
🤔 Explain Like I'm 5
Hallucinations
If you've used AI at all, you've almost certainly (knowingly or not) experienced an AI hallucination - when the model produces an answer that sounds confident but isn't true.
The problem is, it can be so convincing it's very difficult to spot unless you're already a subject-matter expert.
For example:
Ask it for the author of a book that doesn't exist - it might invent one.
Ask it for a court case - it might cite a made-up case that looks real.
Hallucinations happen because LLMs are language machines, not fact machines. They're designed to predict the most likely next word, not to check whether that word is correct.
Think of it like autocomplete on steroids: if you start typing "Once upon a…", your phone will suggest "time." But if you start a sentence that's commonly written incorrectly, it may give you a bad prediction.
AI prioritises the most likely choice, not the correct one.
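To make that concrete, here's a toy sketch in Python (purely illustrative, nothing like a real LLM): the "model" just returns whichever next word is most probable for the prompt it has seen, with no step that checks whether the answer is true.

```python
# Toy illustration only - a real LLM has billions of parameters, but the core
# idea is the same: return the most likely continuation, not a verified fact.
next_word_probs = {
    "Once upon a": {"time": 0.95, "mattress": 0.03, "throne": 0.02},
    "The author of that book is": {"A. Writer": 0.40, "B. Novelist": 0.35, "unknown": 0.25},
}

def predict_next_word(prompt: str) -> str:
    """Pick the highest-probability next word - the likeliest guess, not necessarily correct."""
    probs = next_word_probs.get(prompt, {})
    return max(probs, key=probs.get) if probs else "(no prediction)"

print(predict_next_word("Once upon a"))                 # -> "time"
print(predict_next_word("The author of that book is"))  # -> a confident-sounding guess
```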
🧠 The problem
Training - The LLM is trained on billions of words. It learns patterns in language, not facts.
Prediction - When asked something, it generates the next word with the highest probability of fitting.
Filling gaps - If the model hasn't "seen" the answer before, it doesn't say "I don't know." It improvises.
Confidence illusion - Because the output is fluent and polished, we mistake it for fact.
⚠️ Why it happens
It's pattern-matching, not fact-checking. The model has no built-in way to verify accuracy (and no incentive to tell the truth), and people expect answers, not silence.
💡 What you can do
Hallucinations aren't going away completely, but you can manage them:
Verify critical outputs - Always double-check facts, numbers, and citations.
Ground the model - Connect it to trusted data sources (databases, APIs, search) so it pulls from reality instead of guessing (see the sketch after this list).
Add rules and guardrails - Instruct the AI not to answer if unsure, or to show sources.
Human-in-the-loop - For high-stakes work (law, medicine, finance), keep an expert reviewing the output.
Match task to risk - Use LLMs for brainstorming, drafting, and summarising, but not as the single source of truth.
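Here's roughly what grounding and guardrails can look like if you're wiring this up yourself: a minimal Python sketch using the OpenAI API, where lookup_trusted_source and its little knowledge base are hypothetical stand-ins for your own database, API, or search index.

```python
# Minimal sketch: ground the model in trusted facts and add a guardrail prompt.
# lookup_trusted_source and its knowledge base are hypothetical stand-ins.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def lookup_trusted_source(question: str) -> str:
    """Stand-in for your real database, API, or search call."""
    knowledge_base = {
        "refund policy": "Refunds are available within 30 days of purchase.",
        "support hours": "Support is open Monday to Friday, 9am to 5pm UK time.",
    }
    matches = [fact for topic, fact in knowledge_base.items() if topic in question.lower()]
    return "\n".join(matches) or "No relevant facts found."

def grounded_answer(question: str) -> str:
    facts = lookup_trusted_source(question)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[
            # Guardrails: only use the supplied facts, admit uncertainty, show sources
            {"role": "system", "content": (
                "Answer ONLY using the facts provided. "
                "If the facts don't cover the question, reply 'I don't know'. "
                "Quote the fact you relied on."
            )},
            {"role": "user", "content": f"Facts:\n{facts}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(grounded_answer("What's your refund policy?"))
```

The key idea: the model only ever sees facts you trust, and the system prompt gives it explicit permission to say "I don't know" instead of improvising.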
The takeaway: treat LLMs as creative collaborators, not final authorities. They can generate great first drafts, ideas, and insights, but responsibility for accuracy still lies with you.
Recently we've been updating the Create With website, which is built on Bubble. It's one of the most powerful visual development tools out there.
This is a fascinating workshop where Tom Wesolowski shows us how to build AI agents right inside Bubble - no extra platforms needed!
Tom breaks down everything from scratch, showing us the difference between regular chatbots and true AI agents while walking through the OpenAI API setup. You'll see exactly how to make your Bubble app handle function calls, remember conversations, and execute actions like web searches and sending emails.
Tom doesn't just talk theory - he builds the whole thing live, showing all the workflows and tricks (including some awesome Bubble hacks for creating loops!). Whether you want to soup up your existing Bubble apps with AI powers or build a dedicated agent from scratch, this workshop has you covered.
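Tom does all of this visually in Bubble, but if you're curious what a function call looks like at the API level, here's a rough Python sketch (the send_email tool below is a hypothetical example, not taken from the workshop):

```python
# Rough sketch of OpenAI function calling - the "agent" piece Tom wires up in Bubble.
# The send_email tool is a hypothetical example, not from the workshop.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "send_email",
        "description": "Send an email to a recipient",
        "parameters": {
            "type": "object",
            "properties": {
                "to": {"type": "string"},
                "subject": {"type": "string"},
                "body": {"type": "string"},
            },
            "required": ["to", "subject", "body"],
        },
    },
}]

messages = [{"role": "user", "content": "Email anna@example.com a meeting reminder for 3pm."}]
response = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)

# Instead of replying with text, the model can ask your app to run a tool
for call in response.choices[0].message.tool_calls or []:
    args = json.loads(call.function.arguments)
    print(f"Model wants to call {call.function.name} with {args}")
    # ...your app executes the action, appends the result, and calls the model again
```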
👉 Why it matters: Software is easier to build than ever, so the real value is shifting to trust and brand. People are choosing products from founders they follow and trust, rather than faceless companies. If you're building something, sharing your story genuinely matters.
👉 Why it matters: You can now branch off from any point in a ChatGPT conversation, letting you explore different ideas without losing your original chat. This is handy for testing new prompts or following side topics, especially if you use AI for brainstorming or research.
👉 Why it matters: If you use AI for image creation, consider these basics: subject, context, style, modifiers, lighting, and camera control. Getting these right helps produce clearer, more professional results, even if you're not a photographer. Here's a guide to Nano Banana.
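As an illustration (our own example, not taken from the guide), a prompt touching all six might read:

```
A barista pouring latte art [subject] in a small Tokyo cafe at dawn [context],
editorial photography style [style], steam rising, muted pastel tones [modifiers],
soft window light [lighting], 85mm lens with shallow depth of field [camera control]
```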
🤖 Extra byte
Bubble.io made some upgrades to their platform, such as a faster database and queries (90% faster), and a public experts program if you need help.
Member Office Hours
Talk through your AI, agent and app building questions at our office hours
Join our Create With member office hours calls to talk through your AI questions with our expert coaches.
Last time we talked about getting the most from ChatGPT, using Claude Code, and our experiences with Lovable.
OpenAI CEO Sam Altman said in a recent interview, "I think [in the future] there will be a premium on human, in person, fantastic experiences."
Humans will always need humans
We completely agree with Sam. Humans are social animals and IRL interaction is irreplaceable. That's why we're doubling down on in-person events. Find some near you in our events directory!
(Literally, true story: as I write this newsletter on the train, there is a man sitting across the aisle conversing with ChatGPT like it's his therapist. 😬 ~ Kieran)
With good vibes ✨ ~ The Create With Team
Did you find this edition of the Create With newsletter useful?