Learn about AI and NoCode. Curated news, tutorials and interviews every week. Deep dives on how to build with AI and NoCode tools.
🔥 How to reduce AI hallucinations + new drops from Google & OpenAI
Published about 2 months ago • 4 min read
Create something today with AI & Visual Development ✨
Welcome back, Creators! 👋
We've got +537 new builders joining our newsletter community this week.
This week we also finished editing the Create With 2025 recap video. It's packed with soundbites that will get you excited to explore building with AI.
Don't forget all the talks are available for free on our YouTube channel.
Pour yourselves a cuppa and let's jump in...
👀 Preview: In today's issue
🤔 Why does AI hallucinate and what can we do about it?
📺 Building AI Agents with Bubble.io
🎨 A 6-step guide to creating better AI images
🤔 Explain Like I'm 5 ❓
Hallucinations
If you've used AI at all, you've almost certainly (knowingly or not) experienced an AI hallucination - when it produces an answer that sounds confident but isn't true.
The problem is, a hallucination can be so convincing that it's very difficult to spot unless you're already a subject-matter expert.
For example:
Ask it for the author of a book that doesn't exist → it might invent one.
Ask it for a court case → it might cite a made-up case that looks real.
Hallucinations happen because LLMs are language machines, not fact machines. They're designed to predict the most likely next word, not to check whether that word is correct.
Think of it like autocomplete on steroids: if you start typing "Once upon a…", your phone will suggest "time." But if you start a sentence that's commonly written incorrectly, it may give you a bad prediction.
AI prioritises the most likely choice, not the correct one.
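To make that concrete, here's a toy sketch in Python - the words and probabilities are entirely made up - showing how greedy next-word prediction plays out: the model emits whatever is most likely, with no notion of whether it's true.

```python
# Toy next-word predictor with made-up probabilities (illustration only).
# A real LLM scores every token in its vocabulary, but the principle is the
# same: pick what is most LIKELY, with no concept of what is TRUE.
next_word_probs = {
    ("once", "upon", "a"): {"time": 0.92, "hill": 0.03, "mattress": 0.01},
    ("the", "author", "is"): {"John Smith": 0.40, "Jane Doe": 0.35, "unknown": 0.05},
}

def predict(context):
    candidates = next_word_probs[context]
    # Greedy decoding: take the highest-probability word...
    # even when the honest answer would be "I don't know".
    return max(candidates, key=candidates.get)

print(predict(("once", "upon", "a")))    # -> "time"
print(predict(("the", "author", "is")))  # -> "John Smith": a confident guess, not a fact
```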
🧠 The problem
Training → The LLM is trained on billions of words. It learns patterns in language, not facts.
Prediction → When asked something, it generates the next word with the highest probability of fitting.
Filling gaps → If the model hasn't "seen" the answer before, it doesn't say "I don't know." It improvises.
Confidence illusion → Because the output is fluent and polished, we mistake it for fact.
⚠️ Why it happens
It's pattern-matching, not fact-checking. The model has no built-in way to verify accuracy (and no incentive to tell the truth), and people expect answers, not silence.
💡 What you can do
Hallucinations aren't going away completely, but you can manage them:
Verify critical outputs → Always double-check facts, numbers, and citations.
Ground the model → Connect it to trusted data sources (databases, APIs, search) so it pulls from reality instead of guessing.
Add rules and guardrails → Instruct the AI not to answer if unsure, or to show sources (see the sketch after this list).
Human-in-the-loop → For high-stakes work (law, medicine, finance), keep an expert reviewing the output.
Match task to risk → Use LLMs for brainstorming, drafting, and summarising, but not as the single source of truth.
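To make the "ground the model" and "guardrails" ideas concrete, here's a minimal sketch of that pattern using the OpenAI Python SDK. The context string, question, and model name are placeholders - in a real app the context would come from your database, a search API, or a vector store.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Hypothetical "trusted source" - in a real app, fetch this from your data.
retrieved_context = "Create With is a newsletter about building with AI and visual development."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model works here
    messages=[
        {
            "role": "system",
            # The guardrail: answer only from the supplied context,
            # and refuse rather than guess.
            "content": (
                "Answer ONLY from the context below. "
                "If the answer is not in the context, reply exactly: I don't know.\n\n"
                f"Context:\n{retrieved_context}"
            ),
        },
        {"role": "user", "content": "Who founded Create With?"},  # not in the context
    ],
)
print(response.choices[0].message.content)  # should be "I don't know", not a guess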
The takeaway: treat LLMs as creative collaborators, not final authorities. They can generate great first drafts, ideas, and insights, but responsibility for accuracy still lies with you.
Recently we've been updating the Create With website, which is built on Bubble. It's one of the most powerful visual development tools out there.
This is a fascinating workshop where Tom Wesolowski shows us how to build AI agents right inside Bubble - no extra platforms needed!
Tom breaks down everything from scratch, showing us the difference between regular chatbots and true AI agents while walking through the OpenAI API setup. You'll see exactly how to make your Bubble app handle function calls, remember conversations, and execute actions like web searches and sending emails.
Tom doesn't just talk theory - he builds the whole thing live, showing all the workflows and tricks (including some awesome Bubble hacks for creating loops!). Whether you want to soup up your existing Bubble apps with AI powers or build a dedicated agent from scratch, this workshop has you covered.
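If you're curious what a function call looks like at the API level (outside Bubble), here's a minimal sketch using the OpenAI Python SDK. The send_email tool and its fields are hypothetical stand-ins for the kind of actions wired up in the workshop.

```python
from openai import OpenAI

client = OpenAI()

# A hypothetical tool the model may choose to "call". The model never executes
# anything itself - it returns the function name and arguments as JSON, and
# your app (a Bubble workflow, in the workshop) performs the actual action.
tools = [{
    "type": "function",
    "function": {
        "name": "send_email",
        "description": "Send an email to a recipient.",
        "parameters": {
            "type": "object",
            "properties": {
                "to": {"type": "string"},
                "subject": {"type": "string"},
                "body": {"type": "string"},
            },
            "required": ["to", "subject", "body"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Email sam@example.com to say the demo is at 3pm."}],
    tools=tools,
)
# If the model decided to use the tool, the call details (name + JSON args) are here:
print(response.choices[0].message.tool_calls)
```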
👉 Why it matters: Software is easier to build than ever, so the real value is shifting to trust and brand. People are choosing products from founders they follow and trust, rather than faceless companies. If you're building something, sharing your story genuinely matters.
👉 Why it matters: You can now branch off from any point in a ChatGPT conversation, letting you explore different ideas without losing your original chat. This is handy for testing new prompts or following side topics, especially if you use AI for brainstorming or research.
👉 Why it matters: If you use AI for image creation, consider these basics: subject, context, style, modifiers, lighting, and camera control. Getting these right helps produce clearer, more professional results, even if you're not a photographer. Here's a guide to Nano Banana.
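For instance, a prompt touching all six elements might look like this (an illustrative example, not taken from the guide): "A ceramic coffee mug [subject] on a rustic wooden café table [context], warm editorial photography [style], steam rising from the cup [modifier], soft morning window light [lighting], 50mm lens at f/1.8 with shallow depth of field [camera]."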
🤖 Extra byte
Bubble.io made some upgrades to their platform, such as a faster database and queries (90% faster), and a public experts program if you need help.
Member Office Hours
Talk through your AI, agent and app building questions at our office hours
Join our Create With member office hours calls to talk through your AI questions with our expert coaches.
Last time we talked about getting the most from ChatGPT, using Claude Code, and our experiences with Lovable.
OpenAI CEO Sam Altman said in a recent interview "I think [in the future] there will be a premium on human, in person, fantastic experiences."
Humans will always need humans
We completely agree with Sam. Humans are social animals and IRL interaction is irreplaceable. That's why we're doubling down on in-person events. Find some near you in our events directory!
(Literally, true story: as I write this newsletter on the train, there is a man sitting across the aisle conversing with ChatGPT like it's his therapist. 😬 ~ Kieran)
With good vibes ✨ ~ The Create With Team
Did you find this edition of the Create With newsletter useful?