The engine of Generative AI: Neural networks learn patterns from vast datasets to generate entirely new text, images, and code.
The Email That Made Me Question Everything
It was 2 AM, and I was staring at an email I hadn’t written. The tone was perfect—professional but warm, detailed but concise. It addressed every point from a difficult client’s complaint and offered three thoughtful solutions. I hadn’t drafted a single word. My new AI assistant had written it after analyzing the client’s email, our company’s policies, and my previous communications with them. All I’d done was click “review.”
As someone who’d spent 15 years in tech, I was skeptical of AI hype. But this was different. This wasn’t just automating a task—it was thinking in my voice, understanding nuance, solving problems. That night in 2022, I realized generative AI wasn’t just another tool. It was a new kind of collaborator. Today, after building with these systems and helping companies adopt them, I want to show you what I’ve learned about how they work, why they matter, and how to use them without losing what makes us human.
Part 1: How Generative AI Actually Works—Beyond the Magic
The “Attention” Revolution That Changed Everything
Before 2017, AI struggled with context. If you asked “What did the bear see in the woods?” followed by “Was it scared?” early systems wouldn’t know “it” referred to the bear. Then came the Transformer architecture with its “attention mechanism.”
My First Transformer Experience: I was training a model to write marketing copy. The old system would repeat phrases. The Transformer-based system understood that “luxury apartments” needed different language than “affordable housing” even when given similar prompts.
How Attention Works:
Think of reading this sentence: “The cat sat on the mat because it was tired.”
Your brain naturally focuses on different words as you understand meaning. The attention mechanism does this mathematically, assigning “importance scores” to relationships between words. When processing “it,” the system learns to pay more attention to “cat” than “mat.”
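The "importance scores" idea can be shown in a few lines. This is a toy illustration with made-up numbers, not real transformer weights: the softmax function turns raw relevance scores into attention weights, and a high score for "cat" means the model focuses there when processing "it".

```python
import math

def softmax(scores):
    """Turn raw relevance scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical relevance scores for the token "it" against earlier words in
# "The cat sat on the mat because it was tired." (illustrative numbers,
# not the output of a real model).
words = ["The", "cat", "sat", "on", "the", "mat", "because"]
scores = [0.1, 4.0, 0.5, 0.1, 0.1, 1.5, 0.2]

weights = softmax(scores)
for word, w in sorted(zip(words, weights), key=lambda p: -p[1]):
    print(f"{word:>8}: {w:.2f}")
```

Run it and "cat" dominates the weight distribution, with "mat" a distant second; that skew is exactly what lets the model resolve the pronoun.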
The Training Process: It’s Not Memorization, It’s Pattern Learning
People often ask: “Is ChatGPT just copying from the internet?” No. It’s learning patterns, not memorizing content.
My Training Experiment: To understand this, I trained a small model on cooking recipes. What it learned:
- After “salt and” comes “pepper” 85% of the time
- Measurements usually come before ingredients
- Certain ingredients cluster together (chicken + thyme, beef + rosemary)
- Recipe steps follow a particular structure
When I asked it to create a new recipe for “alien fruit salad,” it didn’t copy—it applied patterns: “Wash and chop 2 cups of glorb-fruit. Combine with 1 tablespoon of zorple-juice. Season with stardust and moon-salt.”
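The pattern-learning idea can be mimicked with simple counting. This is a toy bigram model on a made-up corpus, not the author's actual experiment, but it shows the key point: what gets stored is statistics about word sequences, not copies of any recipe.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for the recipe dataset.
corpus = ("season with salt and pepper . add salt and pepper to taste . "
          "rub the chicken with thyme . roast the beef with rosemary . "
          "season with salt and a squeeze of lemon .")
tokens = corpus.split()

# Count which word follows which -- the model stores these statistics,
# not any individual recipe.
follows = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1

# What the model "learned" about the word after "and":
total = sum(follows["and"].values())
for word, count in follows["and"].most_common():
    print(f"P({word!r} | 'and') = {count}/{total}")
```

In this corpus, "pepper" follows "and" two times out of three, so the model would usually continue "salt and" with "pepper" even though it has never memorized a full recipe.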
Token by Token: How Generation Actually Happens
Generation isn’t magic—it’s probability calculation. When you prompt “Once upon a,” the model:
- Calculates probabilities: “time” (92%), “a” (5%), “midnight” (2%), other (1%)
- Samples: Usually picks “time” but sometimes picks something else (this creates variety)
- Repeats: Adds “time” to the sequence, then recalculates for the next token
The Temperature Setting: This controls randomness. Low temperature = predictable (“Once upon a time”). High temperature = creative (“Once upon a starlit evening when clocks forgot their purpose”).
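Temperature is just a rescaling of the probability distribution before sampling. The sketch below uses the made-up "Once upon a" probabilities from above; with low temperature the distribution sharpens toward "time", with high temperature it flattens and rarer tokens show up.

```python
import math
import random
from collections import Counter

def sample_next(probs, temperature=1.0, rng=random):
    """Rescale the distribution by temperature, then sample one token.
    Low temperature sharpens the distribution; high flattens it."""
    tokens = list(probs)
    logits = [math.log(probs[t]) / temperature for t in tokens]
    exps = [math.exp(l) for l in logits]
    total = sum(exps)
    return rng.choices(tokens, weights=[e / total for e in exps], k=1)[0]

# The illustrative probabilities from the "Once upon a" example above.
next_token = {"time": 0.92, "a": 0.05, "midnight": 0.02, "other": 0.01}

rng = random.Random(0)
low = [sample_next(next_token, temperature=0.2, rng=rng) for _ in range(20)]
high = [sample_next(next_token, temperature=2.0, rng=rng) for _ in range(20)]
print("temperature 0.2:", Counter(low))   # nearly always "time"
print("temperature 2.0:", Counter(high))  # noticeably more varied
```

Real systems apply this rescaling to raw logits rather than to finished probabilities, but the effect on output variety is the same.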
Part 2: Real Applications—Beyond ChatGPT
Case Study 1: The Law Firm That Cut Contract Review Time by 70%
The Problem: A mid-sized law firm spent hundreds of hours weekly reviewing standard contracts (NDAs, employment agreements).
Traditional Approach: Junior lawyers reading every line, comparing to templates.
AI Solution: We fine-tuned a model on their past contracts, successful negotiations, and redline histories.
What the AI Learned:
- Which clauses their clients typically negotiate
- Which terms are non-negotiable
- How their senior partners phrase specific changes
- Industry-standard language vs. problematic wording
The Workflow:
- Upload contract
- AI highlights problematic clauses
- Suggests specific language changes
- Explains why each suggestion matters
- Lawyer reviews, approves, adjusts
Result: 70% time reduction, fewer errors, junior lawyers could handle more complex work.
Case Study 2: The Game Studio That Generated 10,000 Unique Characters
The Challenge: An indie game needed diverse NPCs (non-player characters) but had limited art budget.
Traditional Approach: Reuse assets, leading to repetitive characters.
AI Solution: We built a pipeline using Stable Diffusion + custom training.
The Process:
- Style training: 100 hand-drawn characters to learn art style
- Attribute control: Separate models for race, clothing, weapons, expressions
- Consistency engine: Ensure the same character looks consistent across poses
- Variation control: Parameters for uniqueness vs. coherence
Output: 10,000 unique characters in 48 hours vs. 6 months manually.
Quality Control: Human artist reviewed batches, provided feedback that improved subsequent generations.
Case Study 3: The Medical Research Team That Discovered New Drug Candidates
The Problem: Drug discovery takes 10-15 years, billions of dollars.
AI Approach: Generative models for molecular design.
How It Worked:
- Training: On known drug molecules and their properties
- Generation: New molecules with desired properties (e.g., targeting cancer cells while avoiding liver toxicity)
- Filtering: Physical feasibility, synthesizability
- Testing: Narrowed 1 million AI-generated candidates to 50 for lab testing
Breakthrough: Found 3 promising candidates that would have taken years to conceive manually.
Part 3: The Art and Science of Prompt Engineering

Beyond Simple Requests: The Prompt Hierarchy I Use
Level 1: Basic Prompt
“Write a blog post about AI”
Level 2: Detailed Prompt
“Write an 800-word blog post for small business owners about how AI can automate customer service. Tone: helpful and optimistic. Include 3 specific examples.”
Level 3: Role-Based Prompt
“You are a marketing consultant with 20 years’ experience working with family-owned businesses. Write a persuasive email to a reluctant business owner explaining why they should try AI for customer service. Address their likely concerns: cost, complexity, losing personal touch.”
Level 4: Chain-of-Thought Prompt
“Let’s think step by step. First, analyze why small business owners might resist AI for customer service. Second, identify their top 3 concerns. Third, craft counterarguments for each concern. Fourth, write the email incorporating these insights.”
Level 5: Few-Shot Prompting
[Example 1: Previous successful email]
[Example 2: Another successful email]
“Now write a similar email for AI adoption in customer service.”
My Prompt Template Library
For Writing:
Role: [Expert role]
Task: [Specific output]
Format: [Structure requirements]
Tone: [Voice/style]
Length: [Word count]
Key points: [Must include]
Avoid: [What to exclude]
Examples: [Reference materials]
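In code, a template like this is just string formatting. The sketch below fills the writing template with hypothetical field values; the field names mirror the template, but the example content is invented for illustration.

```python
WRITING_TEMPLATE = """\
Role: {role}
Task: {task}
Format: {format}
Tone: {tone}
Length: {length}
Key points: {key_points}
Avoid: {avoid}
Examples: {examples}"""

def build_prompt(**fields):
    """Fill the writing template; raises KeyError if a field is missing,
    which catches incomplete prompts before they reach the model."""
    return WRITING_TEMPLATE.format(**fields)

prompt = build_prompt(
    role="marketing consultant",
    task="write a product launch email",
    format="three short paragraphs",
    tone="warm but professional",
    length="150 words",
    key_points="early-bird discount, free trial",
    avoid="jargon, exclamation marks",
    examples="last quarter's launch email",
)
print(prompt)
```

Keeping templates as data rather than retyping them each time is what turns ad-hoc prompting into a reusable prompt library.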
For Analysis:
Act as [domain expert]. Analyze [input]. Focus on [aspects]. Organize findings as [format]. Provide recommendations for [audience].
Advanced Techniques That Actually Work
1. Persona Stacking
Instead of “be an expert,” try: “You are a Harvard Business professor who previously worked as a startup CEO and now advises Fortune 500 companies on digital transformation.”
2. Negative Prompting
Not just what you want, but what you don’t: “Explain quantum computing without using metaphors or analogies.”
3. Iterative Refinement
- First pass: Generate ideas
- Second: Expand best idea
- Third: Refine language
- Fourth: Add examples
- Fifth: Adjust tone
Part 4: The Dark Side—Problems Nobody Talks About
The Hallucination Problem: When AI Confidently Makes Things Up
My Experience: A legal AI I worked on cited a case that didn’t exist. It had correct formatting and plausible details but was entirely fabricated.
Why This Happens: The model generates what’s statistically likely, not what’s true. If the training data has patterns of legal citations, it can generate new ones that follow the pattern but reference non-existent cases.
Mitigation Strategies I’ve Developed:
- Retrieval-Augmented Generation (RAG): Ground responses in verified sources
- Confidence Scoring: Have AI rate its own certainty
- Human Verification Loops: Critical claims require confirmation
- Provenance Tracking: Show sources for information
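The RAG idea can be sketched in plain Python. Real systems use embedding search over a document store; this toy version substitutes keyword overlap and a hypothetical mini-corpus, but the structure is the same: retrieve verified passages, then instruct the model to answer only from them.

```python
import re

def words(text):
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question, corpus, k=2):
    """Rank passages by word overlap with the question -- a crude
    stand-in for the embedding search a real RAG system would use."""
    q = words(question)
    return sorted(corpus, key=lambda p: -len(q & words(p)))[:k]

def grounded_prompt(question, corpus):
    """Prepend retrieved sources so the model answers from evidence,
    not from whatever continuation is statistically likely."""
    context = "\n".join(f"- {p}" for p in retrieve(question, corpus))
    return ("Answer using ONLY the sources below; if they do not contain "
            "the answer, say 'not found'.\n"
            f"Sources:\n{context}\nQuestion: {question}")

# Hypothetical verified policy snippets (illustrative, not real documents).
corpus = [
    "Policy 4.2: refunds are issued within 14 days of purchase.",
    "Policy 7.1: enterprise contracts renew annually.",
    "Holiday schedule: support is closed on public holidays.",
]
prompt = grounded_prompt("How many days do customers have for refunds?", corpus)
print(prompt)
```

The explicit "say 'not found'" instruction is the part that discourages the model from inventing an answer when retrieval comes back empty.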
The “Model Collapse” Problem: AI Eating Its Own Tail
The Issue: As more AI-generated content floods the internet, future models train on it, leading to degraded quality.
My Simulation: I created a simple text generator, had it produce content, added that to its training data, repeated. After 5 generations, output became gibberish.
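A version of that feedback loop can be reproduced with the toy bigram model. This is not the author's exact setup, just a minimal sketch: each generation trains only on the previous generation's output, and because sampled tokens can only come from the training text, vocabulary never grows and tends to shrink.

```python
import random
from collections import Counter, defaultdict

def train_bigrams(tokens):
    """Learn next-word counts from a token stream."""
    model = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        model[a][b] += 1
    return model

def generate(model, start, n, rng):
    """Sample up to n tokens from the bigram model."""
    out = [start]
    while len(out) < n:
        options = model.get(out[-1])
        if not options:  # dead end: last token never had a successor
            break
        tokens, counts = zip(*options.items())
        out.append(rng.choices(tokens, weights=counts, k=1)[0])
    return out

rng = random.Random(42)
text = ("the quick brown fox jumps over the lazy dog while the small red hen "
        "pecks near the old barn and the grey cat sleeps in the warm sun").split()

vocab_sizes = []
for _ in range(5):
    vocab_sizes.append(len(set(text)))
    model = train_bigrams(text)
    # The feedback loop: retrain ONLY on the previous generation's output.
    text = generate(model, start=text[0], n=len(text), rng=rng)

print(vocab_sizes)  # vocabulary never grows and tends to shrink
```

Large models degrade in more subtle ways than a bigram toy, but the mechanism is the same: each round can only recombine what survived the previous round.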
Real-World Evidence: Already seeing AI-generated images with distorted text, weird hands—artifacts from training on previous AI images.
The Energy Crisis: The Environmental Cost
Training GPT-4:
- Estimated energy: 50,000+ MWh
- Equivalent: Annual electricity for 5,000 US homes
- Carbon emissions: roughly equivalent to 3,000 cars driven for a year
My Sustainability Initiatives:
- Smaller, specialized models: 10x efficiency for specific tasks
- Efficient architectures: New approaches that do more with less
- Carbon-aware training: Schedule compute when renewable energy is available
- Model reuse: Fine-tune existing models instead of training from scratch
The Copyright Quagmire
The Problem: Who owns AI-generated content? If it’s trained on copyrighted works, is it infringement?
Cases I’ve Advised On:
- Artist whose style was replicated without permission
- Company using AI trained on competitor’s content
- Employee claiming ownership of AI-assisted work
Current Best Practices:
- Transparent sourcing: Document training data origins
- Human augmentation: Significant human creative input
- Licensing considerations: Use properly licensed training data
- Clear policies: Company guidelines on AI-generated IP
Part 5: The Human-AI Collaboration Framework

The Partnership Model That Actually Works
Level 1: AI as Assistant (Do what I say)
- Human: Full control, AI executes
- Example: “Write email with these points”
Level 2: AI as Collaborator (Do what I would do)
- Human: Sets direction, AI contributes ideas
- Example: “Help me brainstorm solutions to this problem”
Level 3: AI as Expert (Do what’s best)
- Human: Asks question, AI provides expert-level answer
- Example: “Analyze this legal contract and identify risks”
Level 4: AI as Thought Partner (Make me think)
- Human: Explores ideas with AI as sounding board
- Example: “Challenge my assumptions about this strategy”
My “AI-Augmented” Workday
Morning (Creative Work):
- AI: Brainstorm ideas, generate outlines
- Human: Select, refine, add originality
- Ratio: 70% AI, 30% human at this stage
Afternoon (Analytical Work):
- AI: Analyze data, identify patterns
- Human: Interpret, make decisions
- Ratio: 50/50
Evening (Communication):
- AI: Draft emails, reports
- Human: Personalize, add emotional intelligence
- Ratio: 30% AI, 70% human
The Skills That Matter Now
Human-Only Skills:
- Ethical judgment
- Emotional intelligence
- Creative originality
- Strategic thinking
- Relationship building
AI-Augmentation Skills:
- Prompt engineering
- Quality assessment
- Synthesis ability
- Bias detection
- Tool orchestration
Obsolete Skills:
- Simple content creation
- Basic data analysis
- Routine communication
- Information retrieval
- Template-based work
Part 6: The Future—What’s Coming Next
Multimodal Integration: The Next Leap
Current: Separate models for text, images, audio.
Future: Unified models that understand and generate across modalities.
Prototype I’m Testing: System that can:
- Read a scientific paper
- Generate diagrams to explain concepts
- Create a video summary
- Answer questions about it
- Suggest related research
Agentive AI: From Generation to Action
Today’s AI: Generates content when prompted.
Tomorrow’s AI: Takes actions to achieve goals.
Example Project: Travel planning agent that:
- Understands your preferences
- Searches flights, hotels
- Books based on your criteria
- Handles changes and issues
- Learns from your feedback
Personalized Models: Your Digital Twin
The Vision: Instead of one-size-fits-all models, everyone has their own AI trained on their:
- Writing style
- Knowledge base
- Communication patterns
- Problem-solving approaches
Privacy-Preserving Approach: Federated learning—your data stays on your device, model improves locally.
The “Small AI” Revolution
Big Problem: Current models are huge, expensive, energy-intensive.
Solution: Smaller, specialized models that:
- Run on your phone
- Cost pennies to train
- Are tailored to specific tasks
- Respect your privacy
My Current Project: A 100M-parameter model (vs. GPT-4’s rumored ~1.7T) that’s 95% as good for legal document analysis.
Part 7: How to Get Started—Practical Advice
For Individuals: Your 30-Day Generative AI Journey
Week 1: Exploration
- Try ChatGPT, Midjourney, other tools
- Learn basic prompting
- Discover what AI is good at (and bad at)
- Start with low-stakes tasks (brainstorming, drafting)
Week 2: Integration
- Identify 2-3 tasks in your work/life to augment
- Develop templates for repetitive work
- Create a “prompt library” for common needs
- Establish quality check protocols
Week 3: Mastery
- Learn advanced techniques (few-shot, chain-of-thought)
- Experiment with different models for different tasks
- Develop your “human in the loop” process
- Measure time/quality improvements
Week 4: Ethics & Strategy
- Establish personal guidelines for AI use
- Understand limitations and risks
- Plan your skill development
- Consider long-term implications for your career
For Businesses: Implementation Framework
Phase 1: Assessment (1-2 months)
- Identify high-ROI use cases
- Assess data readiness
- Evaluate skill gaps
- Calculate potential impact
Phase 2: Pilot (2-3 months)
- Start with controlled experiments
- Measure results rigorously
- Develop best practices
- Train initial team
Phase 3: Scale (3-6 months)
- Expand to more departments
- Build internal expertise
- Develop governance framework
- Integrate with existing systems
Phase 4: Optimization (Ongoing)
- Continuously improve
- Stay current with new developments
- Monitor for risks
- Evolve strategy
For Creatives: The New Workflow
Traditional: Idea → Research → Create → Edit → Publish
AI-Augmented: Idea → AI Research → AI Draft → Human Refine → AI Polish → Human Finalize → Publish
Key Principle: AI handles breadth, humans handle depth. AI generates options, humans make choices.
The Philosophical Question: What Does It Mean to Create?
After two years working intensely with generative AI, I’ve come to a surprising conclusion: these tools haven’t diminished human creativity—they’ve revealed it.
Before AI, we confused production with creation. Writing 1,000 words wasn’t creativity—it was typing. Choosing which 1,000 words to write, what story to tell, what point to make—that was creativity.
AI handles production. Humans handle meaning.
The best work I’ve seen—whether writing, art, code, or strategy—comes from teams that understand this distinction. The AI generates possibilities. The human chooses, refines, contextualizes, gives meaning.
We’re not being replaced by machines. We’re being challenged to be more human—to do the things only humans can do: understand nuance, make ethical judgments, connect emotionally, find meaning.
The future belongs not to those who can use AI, but to those who can use AI to be more human.
About the Author: Sana Ullah Kakar is a generative AI researcher and practitioner who has been building with these systems since before they were mainstream. After initial skepticism turned to belief, he has helped organizations ranging from startups to Fortune 500 companies implement generative AI responsibly and effectively. He focuses on human-AI collaboration models that augment rather than replace human capabilities.
Free Resource: Download our Generative AI Prompt Library & Ethics Checklist [LINK] including:
- 50+ proven prompt templates for different tasks
- Quality assessment checklist for AI outputs
- Ethics and bias detection guide
- Implementation roadmap template
- Personal skill development plan
Frequently Asked Questions (FAQs)
1. What’s the difference between AI, Machine Learning, and Generative AI?
AI is the broad field of creating intelligent machines. Machine Learning is a subset of AI that uses algorithms to learn from data. Generative AI is a subset of ML focused on creating new content.
2. Is content created by AI like ChatGPT copyright-free?
The legal landscape is unclear and evolving. Generally, most jurisdictions do not grant copyright to non-human entities. The user who creates a sufficiently creative prompt might have a claim, but this is being tested in courts worldwide.
3. Can I tell if a piece of text was written by AI?
It’s becoming increasingly difficult. There are AI detection tools, but they are not foolproof. The best indicators are a lack of depth, factual errors, or a generic, “vanilla” tone.
4. How can I use Generative AI to make money?
You can use it to offer freelance services (writing, design) more efficiently, create and sell digital products (e-books, templates), or build a business on top of AI APIs. For more on building a modern business, see this E-commerce Business Setup Guide.
5. What data does ChatGPT use? Is my conversation private?
Your conversations may be reviewed by trainers to improve the systems, so avoid sharing sensitive personal information. Check the privacy policy of the specific tool you are using.
6. What is the “black box” problem in AI?
It refers to the fact that even the engineers who create large AI models often cannot fully explain why a model generated a specific output, due to the model’s immense complexity.
7. How can small businesses leverage Generative AI?
For drafting marketing emails, creating social media posts, writing product descriptions, and generating ideas for blog content, thereby saving time and resources.
8. Will Generative AI kill artistic jobs?
It will change them. The value may shift from technical execution to creative direction, curation, and prompt engineering. The human artist’s unique vision and emotional intelligence will remain paramount.
9. What is “transfer learning” in this context?
It’s the technique of taking a pre-trained model (like GPT-4) and fine-tuning it on a smaller, specialized dataset (e.g., legal documents) to make it an expert in that domain.
10. Can Generative AI be used for evil purposes?
Unfortunately, yes. It can be used to generate disinformation at scale, create sophisticated phishing emails, or produce deepfakes for malicious purposes. This is a major area of concern for platforms and regulators.
11. How does DALL-E create an image from text?
It uses a diffusion process. It starts with random noise and gradually refines it, step-by-step, guided by the text prompt, until a coherent image emerges.
12. What are “AI ethics” and why do they matter?
AI ethics is a framework for guiding the responsible development and use of AI, focusing on fairness, accountability, transparency, and mitigating harm. It matters to ensure this powerful technology benefits all of humanity.
13. How much does it cost to train a model like GPT-4?
Estimates range from tens to hundreds of millions of dollars, factoring in computational costs, data acquisition, and expert salaries.
14. Can I run a powerful Generative AI model on my own computer?
Yes, but only smaller, open-source models. Running a model as large as GPT-4 requires immense computational resources only available in large data centers.
15. What is “conversational context” and how does ChatGPT maintain it?
It refers to the model’s ability to remember what was said earlier in the conversation. The Transformer architecture allows it to pay “attention” to all the previous tokens in the chat to generate a relevant response.
16. How will this technology impact sectors like finance?
It can be used for generating financial reports, analyzing market sentiment, and personalizing financial advice. For a deeper dive into managing money, see this Personal Finance Guide.
17. What is the role of Reinforcement Learning from Human Feedback (RLHF)?
RLHF is a training technique used to fine-tune models like ChatGPT. Human trainers rank different responses, and the model is rewarded for generating responses that humans prefer, making it more helpful and aligned with human values.
18. Are there any environmental concerns with Generative AI?
Yes, training large models consumes a significant amount of energy, contributing to a carbon footprint. The industry is actively working on improving efficiency.
19. How can I start a career in Generative AI?
Backgrounds in computer science, data science, and machine learning are common. However, roles like “Prompt Engineer” and “AI Ethicist” are emerging that require diverse skills.
20. What is the future of search engines with Generative AI?
Instead of just providing a list of links, search is evolving to provide direct, summarized answers generated by AI, as seen with Google’s SGE and Bing Chat.
21. Can AI be creative?
This is a philosophical debate. AI can produce novel combinations and styles based on its training, but whether this constitutes true creativity or is just advanced mimicry is an open question.
22. How can nonprofits use Generative AI?
They can use it to draft grant proposals, create awareness campaigns, and personalize donor communications, allowing them to focus more on their mission. For more, see this Nonprofit Hub.
23. What is “fine-tuning”?
The process of taking a pre-trained general model and further training it on a specific dataset to make it an expert in a particular domain or style.
24. Where can I learn more about the societal impact of such technologies?
Our Culture & Society category explores these very questions. For other perspectives, World Class Blogs also offers great insights.
25. I have a specific question not covered here. How can I ask?
We love hearing from our readers! Please feel free to Contact Us with your questions. For a wealth of additional information, you can also explore Sherakat Network’s Resources.
Discussion: What’s been your most surprising experience with generative AI? Have you had any “this changes everything” moments or concerning discoveries? Share your stories below—we’re all learning this together.