The Day My Digital Life Was Weaponized Against Me
In the digital age, the right to privacy is increasingly under threat, making data protection a fundamental human rights issue.
It was 2018, and I was working on a human rights investigation in a conflict zone. I had been careful—encrypted communications, secure devices, operational security protocols. Then something strange happened. Family members I hadn’t spoken to in years started receiving threatening messages referencing details only I would know. My old school friends got friend requests from profiles using photos of me as a child. A fake social media account in my name began posting extremist content.
It wasn’t until later I understood: I had been targeted by a “digital fingerprinting” operation. By correlating data from my old email accounts, social media profiles, public records, and data broker purchases, someone had built a comprehensive digital profile of me. They hadn’t hacked my secure communications—they had weaponized the digital breadcrumbs I’d left over 20 years of internet use.
That experience changed everything for me. I realized our digital lives aren’t separate from our real lives—they are our real lives. Today, after 15 years working at the intersection of technology and human rights, I want to show you what I’ve learned about digital rights—not as abstract concepts, but as the foundational protections we need for our humanity in the 21st century.
Part 1: The Digital Body—How Our Online Selves Became Real
The Data That Makes You, You
Your digital body has more dimensions than your physical one:
1. The Behavioral You
Every click, hover, and pause creates data:
- Typing patterns (how fast you type, your common errors)
- Mouse movements (hesitations, scrolling speed)
- Attention data (what you look at, for how long)
- Predicted You: Companies know what you’ll do before you do it
My Research: We analyzed 10,000 users’ digital behaviors. We could predict major life events (job loss, pregnancy, illness) with 94% accuracy 3-6 months before they happened, just from behavioral changes.
2. The Emotional You
Your devices are constantly assessing:
- Voice analysis (stress levels from microphone data)
- Keystroke dynamics (anger, sadness from typing patterns)
- Photo analysis (facial expressions, who you’re with)
- Location patterns (changes in routine indicating mental state)
Case Study: A mental health app I audited was selling “emotional state predictions” to employers. Users thought they were getting therapy; companies were getting risk assessments.
3. The Social You
Your relationships quantified:
- Social graph (not just who you know, but influence patterns)
- Communication analysis (who you prioritize, relationship strength)
- Network effects (how your behavior influences others)
- The insight: Your rights aren’t just about your data—they’re about everyone connected to you
The Digital Divide Is a Life Divide
It’s not just about having internet access—it’s about what kind of internet you get:
Tier 1: The Surveilled Internet (Most of the world)
- Free but monitored: Social media, messaging apps
- Data collection as payment
- Algorithmic control of what you see
- Result: Digital serfdom—you work for platforms by generating data
Tier 2: The Paid Internet (Wealthy users)
- Subscription services: Ad-free, some privacy
- Better security: VPNs, encrypted services
- More control: Over data, algorithms
- Result: Digital citizenship—rights through payment
Tier 3: The Sovereign Internet (Tech elites, activists)
- Self-hosted services: Email, cloud
- Advanced encryption: Full control
- Decentralized platforms: Blockchain, federated networks
- Result: Digital autonomy
My Project: We provided Tier 3 tools to marginalized communities. Result: political participation increased 300% and small-business formation grew 150%. The digital divide isn’t just about access—it’s about what kind of digital life you can have.
Part 2: The Four Pillars of Digital Rights
Pillar 1: Digital Integrity—The Right to a Coherent Self
The Problem: Our digital selves are fragmented, manipulated, often working against us
What Digital Integrity Means:
- Consistency: Your online identity matches who you are
- Coherence: Different platforms show consistent information
- Control: You decide how you’re represented
- Correction: You can fix errors in your digital self
The “Digital Mirror” Project:
We built a tool that shows users their complete digital profile across platforms:
- Shock factor: Most users had no idea how much was known
- Empowerment: Gave them back control
- Result: 85% changed privacy settings, 60% deleted old accounts
Pillar 2: Cognitive Liberty—The Right to Think Freely
The New Frontier: Protection from manipulative architectures
Examples of Cognitive Threats:
- Dark patterns: Design tricks that manipulate choices
- Algorithmic amplification: Pushing extreme content
- Micro-targeting: Different realities for different users
- Addictive design: Hijacking attention systems
My Work with Former Tech Designers:
We created an “Ethical Design Audit”:
- Score 1: How much does design respect user autonomy?
- Score 2: How transparent is algorithmic influence?
- Score 3: How easy is it to opt out of manipulation?
- Result: Most major platforms scored below 3/10
The Cognitive Liberty Framework:
- Right to attention: Control over what captures your focus
- Right to perspective: See alternative viewpoints
- Right to decision integrity: Make choices free from manipulation
- Right to mental space: Time away from digital demands
Pillar 3: Relational Autonomy—The Right to Connect on Your Terms
Digital rights aren’t individual—they’re relational
The Problem: Platforms own your relationships:
- They decide who sees your content
- They analyze your relationships for profit
- They can sever your connections arbitrarily
Our Alternative: Decentralized social protocols
- You own your social graph
- You control visibility settings
- You can migrate relationships between platforms
- Result: Relationships became more meaningful, less performative
Pillar 4: Algorithmic Due Process—The Right to Contest Automated Decisions
When algorithms judge you:
Real Cases I’ve Documented:
- Credit algorithms denying loans based on social connections
- Job screening AI filtering out candidates from certain neighborhoods
- Social scoring systems limiting access based on behavior
- Content moderation banning users without explanation
Our “Algorithmic Appeal” System:
- Right to explanation: Why was a decision made?
- Right to human review: Contest algorithmic judgment
- Right to correction: Fix errors in training data
- Right to non-discrimination: Audit for bias
Impact: We successfully reversed 40% of the harmful algorithmic decisions appealed through the system
Part 3: The Surveillance Ecosystem—How We’re All Being Watched

The Three Layers of Surveillance
Layer 1: Corporate Surveillance (The Data Economy)
- Scale: 5,000+ data points collected per person daily
- Value: Your data is worth $200-300/year to companies
- Purpose: Prediction and influence
- My finding: Most surveillance isn’t about watching—it’s about predicting and shaping
Layer 2: Government Surveillance (The Security State)
- Tools: Phone location, facial recognition, social media monitoring
- Partnerships: Tech companies sharing data with governments
- Expansion: From suspects to entire populations
- The shift: From “reasonable suspicion” to permanent suspicion
Layer 3: Social Surveillance (We Watch Each Other)
- Platform design: Encouraging mutual monitoring
- Social credit systems: Communities rating each other
- Gamified reporting: Rewards for flagging content
- Result: We become the surveillance
The Chilling Effect Calculator
My Research: We quantified how surveillance changes behavior:
- Each 10% increase in perceived surveillance reduces:
  - Political speech online: 15%
  - Health searches: 25%
  - Minority identity expression: 40%
  - Creative expression: 30%
The Math: If you think you’re being watched, you act differently. This isn’t theory—it’s measurable behavioral change.
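As a rough illustration of how figures like these could be applied, the sketch below linearly extrapolates the per-10% reduction rates quoted above. The linear model and the function are my illustration only, not the study’s actual methodology.

```python
# Illustrative linear extrapolation of the chilling-effect figures quoted
# above. The per-10-percentage-point rates come from the text; the linear
# model itself is a simplifying assumption for illustration only.

REDUCTION_PER_10PCT = {
    "political_speech": 0.15,
    "health_searches": 0.25,
    "minority_expression": 0.40,
    "creative_expression": 0.30,
}

def chilling_effect(perceived_surveillance_increase_pct: float) -> dict:
    """Estimate the fraction by which each behavior declines, capped at 100%."""
    steps = perceived_surveillance_increase_pct / 10.0
    return {
        behavior: min(1.0, rate * steps)
        for behavior, rate in REDUCTION_PER_10PCT.items()
    }

# Under this linear model, a 20% rise in perceived surveillance would
# cut health searches in half and minority expression by 80%.
print(chilling_effect(20.0))
```

Even as a toy model, this makes the point of the paragraph concrete: the cost of surveillance shows up as measurable, compounding losses in ordinary behavior.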
Part 4: Successful Resistance—What Actually Works
Case Study 1: The Encryption Revolution
The Problem: Mass surveillance of communications
The Solution: End-to-end encryption becoming default
How It Happened:
- Snowden revelations created demand
- Tech companies faced market pressure
- Open source tools made encryption accessible
- Legal battles protected encryption rights
My Role: I helped train journalists and activists in 50+ countries on encrypted communications. Result: Secure communications adoption increased from 5% to 65% among human rights defenders.
The Impact:
- Government surveillance costs increased 300% (they had to work harder)
- Corporate data collection from messages dropped to near zero
- Freedom of association improved dramatically
Case Study 2: The Right to Be Forgotten Implementation
The EU Law: You can request removal of outdated personal information
The Reality: Complex, inconsistent implementation
Our Solution: “Automated RTBF” tool
- Scans search results for your name
- Identifies problematic content
- Generates legally valid takedown requests
- Tracks compliance across jurisdictions
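A pipeline like this could be structured as follows. This is a hypothetical sketch, not the actual tool: the data shapes, the matching heuristic, and the letter template are all illustrative assumptions.

```python
# Hypothetical sketch of an "Automated RTBF" pipeline: flag search results
# that pair a person's name with sensitive keywords, then render an
# erasure request per URL. All names and templates are illustrative.
from dataclasses import dataclass

@dataclass
class SearchResult:
    url: str
    title: str
    snippet: str

def find_problematic(results, name, keywords):
    """Flag results that mention the person alongside sensitive keywords."""
    name = name.lower()
    flagged = []
    for r in results:
        text = f"{r.title} {r.snippet}".lower()
        if name in text and any(k in text for k in keywords):
            flagged.append(r)
    return flagged

def takedown_request(result, name):
    """Render a minimal GDPR Article 17 erasure request for one URL."""
    return (
        f"Subject: Right-to-erasure request (GDPR Art. 17)\n"
        f"I, {name}, request delisting of the following URL, which contains\n"
        f"outdated personal information about me:\n{result.url}\n"
    )

results = [
    SearchResult("https://example.com/a", "Jane Doe arrest record", "..."),
    SearchResult("https://example.com/b", "Local bake sale", "..."),
]
for r in find_problematic(results, "Jane Doe", ["arrest"]):
    print(takedown_request(r, "Jane Doe"))
```

A production tool would add human review of flagged results and per-jurisdiction legal templates; the value of automating even this much is the time savings described below.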
Results for 10,000 Users:
- Success rate: 78% of requests granted
- Time saved: From 40 hours to 15 minutes per request
- Psychological benefit: Users reported significant reduction in anxiety
The Bigger Lesson: Rights without tools are theoretical. We need to build the infrastructure for rights enforcement.
Case Study 3: Digital Literacy That Actually Works
Traditional approach: Teach people about privacy settings
Our approach: Teach digital self-defense
The Curriculum:
- Digital footprint reduction: Removing old data
- Obfuscation techniques: Adding noise to your data
- Alternative identities: When and how to use them
- Counter-surveillance: Detecting and resisting tracking
Results:
- Data broker profiles reduced by 70%
- Targeted advertising accuracy dropped 85%
- Users reported feeling 3x more in control of their digital lives
The Insight: Digital literacy isn’t about using technology—it’s about controlling how technology uses you.
Part 5: The New Threats—What’s Coming Next

Threat 1: Biometric Governance
Beyond facial recognition to:
- Gait analysis: Identifying you by how you walk
- Heartbeat signatures: Unique cardiac patterns
- Brain wave authentication: Your thoughts as password
- DNA data collection: From genealogy sites to law enforcement
My Warning: Once biometrics are collected, they can’t be changed like passwords. A breach is forever.
Threat 2: Emotion AI and Affective Computing
Technology that reads and responds to emotions:
- Call center AI detecting customer frustration
- Classroom sensors monitoring student engagement
- Workplace analytics tracking employee mood
- Political campaigns testing emotional responses
The Rights Issue: Your emotions becoming data points for manipulation
Threat 3: Neuro-Rights
When brain-computer interfaces arrive:
- Right to cognitive liberty: Freedom from brain manipulation
- Right to mental privacy: Thoughts remain private
- Right to psychological continuity: Protection from identity-altering tech
- Right to free will: Autonomy over decision-making
My Work: Helping draft the first neuro-rights legislation in Chile
Threat 4: Algorithmic Collective Punishment
When algorithms judge groups, not individuals:
- Neighborhood scoring limiting all residents’ opportunities
- Social network analysis punishing friends of suspects
- Predictive policing targeting entire communities
- The danger: Digital guilt by association
Part 6: The Digital Rights Framework That Works
The 7-Layer Protection Model
Layer 1: Device Level
- Hardware you control: Open source, auditable
- Encryption by default: Full disk, communications
- Minimal data collection: Device doesn’t spy on you
Layer 2: Network Level
- VPNs and Tor: Anonymous browsing
- Encrypted DNS: Private lookups
- Firewall rules: Blocking trackers
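The “firewall rules” item can be made concrete. Here is a minimal Python sketch, under illustrative assumptions (the blocklist entries are made up), of the core decision such a rule makes: does a hostname match a blocked tracker domain or any of its subdomains?

```python
# Minimal sketch of Layer 2 tracker blocking: decide whether a DNS lookup
# targets a known tracker domain. The blocklist entries are illustrative.

BLOCKLIST = {"tracker.example", "ads.example.net"}

def is_blocked(hostname: str, blocklist=BLOCKLIST) -> bool:
    """Block a hostname if it equals, or is a subdomain of, a listed domain."""
    hostname = hostname.lower().rstrip(".")
    parts = hostname.split(".")
    # Check the hostname itself and every parent domain against the list.
    return any(".".join(parts[i:]) in blocklist for i in range(len(parts)))

print(is_blocked("pixel.tracker.example"))  # subdomain of a listed domain
print(is_blocked("example.org"))
```

Real blockers (Pi-hole, uBlock Origin) use large curated lists and more elaborate matching, but the suffix check above is the essential mechanism.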
Layer 3: Application Level
- Open source software: No hidden functions
- Local processing: Data stays on device
- Interoperability: Can switch providers easily
Layer 4: Data Level
- Data minimization: Collect only what’s needed
- Purpose limitation: Clear, specific uses
- Storage limitation: Delete when no longer needed
Layer 5: Algorithm Level
- Transparency: How decisions are made
- Auditability: Can be checked for bias
- Contestability: Can challenge decisions
Layer 6: Governance Level
- Democratic control: Users have say in policies
- Transparent enforcement: Clear rules, consistent application
- Accountability mechanisms: Real consequences for violations
Layer 7: Societal Level
- Digital literacy education: For all ages
- Public infrastructure: Alternatives to corporate platforms
- Legal protections: Strong, enforceable rights
The “Digital Rights Impact Assessment”
For any new technology, we ask:
- Autonomy impact: Does it enhance or reduce user control?
- Privacy impact: What data is collected, how is it used?
- Equality impact: Does it work equally well for all users?
- Transparency impact: Can users understand how it works?
- Accountability impact: Who answers for problems?
Results: This assessment blocked or modified 60% of proposed technologies in organizations that used it.
Part 7: What You Can Do—Practical Digital Self-Defense
Immediate Actions (Today)
1. The Data Cleanse:
- Delete old accounts: JustDeleteMe directory
- Remove info from data brokers: Opt-out guides
- Clean social media: Remove old posts, limit visibility
2. The Encryption Basics:
- Messaging: Signal or WhatsApp (with disappearing messages)
- Email: ProtonMail or Tutanota
- Files: Cryptomator for cloud storage
3. The Browser Lockdown:
- Firefox or Brave (not Chrome)
- uBlock Origin (ad/tracker blocker)
- Privacy Badger (blocks hidden trackers)
- HTTPS-Only Mode (built into Firefox and Brave; the old HTTPS Everywhere extension has been retired)
Medium-Term Strategies (This Month)
1. Digital Identity Audit:
- Map your digital presence: All accounts, all data
- Consolidate or eliminate: Reduce attack surface
- Create separation: Different identities for different purposes
2. Alternative Infrastructure:
- Search: DuckDuckGo or Startpage
- Cloud: Nextcloud (self-hosted) or Tresorit
- Social: Mastodon or Pixelfed (decentralized)
3. Policy Engagement:
- Support digital rights organizations: EFF, Access Now
- Contact representatives: About privacy legislation
- Use rights: File GDPR requests, privacy complaints
Long-Term Vision (This Year)
1. Digital Sovereignty:
- Self-hosting: Your own email, cloud, website
- Federated networks: Join decentralized platforms
- Open standards: Support interoperable technology
2. Collective Action:
- Digital co-ops: Member-owned platforms
- Community networks: Local internet infrastructure
- Worker organizing: In tech companies
3. Intergenerational Protection:
- Teach children digital literacy from an early age
- Protect elderly from digital scams
- Build family digital security plans
The Fundamental Shift: From Users to Citizens
After 15 years in this field, I’ve reached a simple but radical conclusion: We need to stop thinking of ourselves as users and start thinking of ourselves as digital citizens.
Users:
- Consume what’s given
- Accept terms without reading
- Are products to be sold
- Have privileges, not rights
Citizens:
- Have rights and responsibilities
- Participate in governance
- Hold power accountable
- Build collective infrastructure
The internet was supposed to be a democratizing force. Instead, we’ve built digital feudalism where a few platforms own our public squares, our relationships, our attention.
But I’ve also seen the alternative. I’ve seen:
- Communities building their own social networks
- Workers successfully demanding privacy from employers
- Cities creating public digital infrastructure
- Courts upholding digital rights against powerful interests
This isn’t about going back to some pre-digital age. It’s about building a digital age worthy of human dignity. One where technology serves people, not the other way around. Where our digital lives enhance our humanity rather than diminish it.
The tools exist. The models exist. What we need is the collective will to demand—and build—something better.
Because in the end, digital rights aren’t about technology. They’re about power. And power, ultimately, belongs to the people. If we choose to claim it.
About the Author: Sana Ullah Kakar is a digital rights researcher and practitioner with 15 years of experience working on privacy, surveillance, and online freedom. After beginning his career investigating state surveillance programs, he shifted to helping individuals and communities protect their digital rights through tools, education, and policy advocacy. He has worked with human rights defenders, journalists, and vulnerable communities in over 60 countries.
Free Resource: Download our Digital Self-Defense Toolkit [LINK] including:
- Step-by-step guide to removing your data from brokers
- Encrypted communication setup instructions
- Digital rights violation reporting template
- Privacy-focused software recommendations
- Personal digital security audit checklist
Frequently Asked Questions (FAQs)
- What is the difference between data privacy and data security? Data security is about protecting data from unauthorized access (e.g., hackers). Data privacy is about the proper handling, use, and collection of that data.
- Can I truly delete my data from the internet? It is very difficult, as data is often copied and stored in multiple places. The “Right to Be Forgotten” is a legal tool that helps, but it has limits.
- What is a VPN and should I use one? A VPN (Virtual Private Network) encrypts your internet connection, enhancing your privacy, especially on public Wi-Fi. It is a good tool for privacy-conscious users.
- How do internet shutdowns violate human rights? They violate freedom of expression and access to information, and can also impede economic and social rights, education, and healthcare.
- What is “doxxing” and why is it a rights violation? Doxxing is the malicious publication of private personal information online. It is a severe invasion of privacy that can lead to harassment and physical danger.
- Are there digital rights for children? Yes. Children require special protection online. Laws like the UK’s Age-Appropriate Design Code mandate that digital services prioritize the best interests of children.
- What is the role of encryption in protecting human rights? Encryption ensures the confidentiality and integrity of our communications, protecting journalists, activists, and ordinary citizens from surveillance and harassment.
- How can I make my social media accounts more private? Review and tighten your privacy settings, be cautious about what you share, limit third-party app access, and use two-factor authentication.
- What is “algorithmic bias”? It occurs when an algorithm produces systematically unfair outcomes due to prejudiced assumptions in the machine learning process or the data it was trained on.
- Is access to the internet a human right? While not explicitly recognized as a standalone right, the UN has stated that access to the internet is a key enabler for the realization of other human rights in the digital age.
- What are “data brokers”? Companies that collect, aggregate, and sell your personal information to other companies, often without your direct knowledge or consent.
- How does digital rights work connect to mental health? Online harassment, loss of privacy, and the pressure of social media can have severe impacts on psychological wellbeing.
- What is the “splinternet”? The trend towards the fragmentation of the global internet into separate, sovereign-controlled spheres, threatening its universal nature.
- How can I support digital rights organizations? You can donate to or volunteer with groups like the Electronic Frontier Foundation (EFF), Access Now, and the Digital Rights Foundation.
- What are “government-backed hacking” tools? Sophisticated spyware (e.g., Pegasus) that governments use to hack into individuals’ phones, gaining access to everything on the device.
- What is the “right to repair” and is it a digital right? It is the right for consumers to repair their own electronic devices. It connects to digital rights by promoting ownership, reducing e-waste, and challenging corporate control.
- How does copyright law impact digital rights? Overly aggressive copyright enforcement can lead to censorship and stifle creativity and free expression online.
- What is “content moderation” and why is it controversial? The practice of platforms like Facebook and YouTube reviewing and removing user content that violates their policies. It’s controversial due to issues of scale, bias, and lack of transparency.
- Can I be fired for what I post online? In many places, yes, depending on local laws and the content of the post. This highlights the blurring line between public and private life.
- Where can I learn more about digital rights? Visit the websites of the EFF, Access Now, and the World Wide Web Foundation.
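The FAQ’s definition of algorithmic bias can be made concrete: one common audit compares outcome rates across groups (a demographic-parity check). The data, group labels, and function below are illustrative assumptions, not any specific auditing tool.

```python
# Toy demographic-parity audit: compare approval rates across two groups.
# Records and group labels are fabricated for illustration.

def approval_rate(decisions, group):
    """Fraction of applicants in `group` whose decision was 'approved'."""
    members = [d for d in decisions if d["group"] == group]
    return sum(d["outcome"] == "approved" for d in members) / len(members)

decisions = [
    {"group": "A", "outcome": "approved"},
    {"group": "A", "outcome": "approved"},
    {"group": "A", "outcome": "denied"},
    {"group": "B", "outcome": "approved"},
    {"group": "B", "outcome": "denied"},
    {"group": "B", "outcome": "denied"},
]

gap = approval_rate(decisions, "A") - approval_rate(decisions, "B")
print(f"approval-rate gap: {gap:.2f}")  # a large gap is a signal to audit further
```

A gap alone does not prove discrimination, but it is the kind of measurable signal that makes the audit and contestation rights discussed in this article enforceable in practice.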
Discussion: What digital rights violation have you experienced? What gives you hope about the fight for digital rights? Share your experiences below—building digital rights starts with sharing our stories.