On Parenting & Age Verification
Lately, a disturbing number of countries have decided that so-called "age verification" is the best way to protect children from the dangers of the internet, specifically by forcing all minors under a certain age (usually 15 or 16) off of social media platforms.
To be fair, the internet is a vast place and many digital spaces are not appropriate for children. But experts widely agree that requiring everyone to show ID is an overly simplistic Band-Aid of a "solution" that will ultimately do more harm than good.
Unfortunately, I have seen a disappointing number of privacy enthusiasts decide that the proper response to this latest privacy incursion is to blame the parents. The idea seems to be that if "parents would just parent," then nobody would be pushing for this in the first place (like governments and companies have ever needed an excuse to violate our rights). I believe, however, that this is a misguided and unhelpful reaction that puts blame on the wrong parties, ultimately fixing nothing and letting the real villains continue to make everything worse.
Parents Aren't Lazy
The "parents should parent" narrative is objectively flawed. It hinges on the premise that most parents (or at least a significant number) are - to some extent - negligent and not trying hard enough to look after their kids. But the numbers don't support this. According to the National Center for Education Statistics, in 2023:
- 71% of parents report "often" or "sometimes" helping their kids with homework
- 68% of parents attended at least one school-related meeting each year
- Only 23% of families reported not having any shared family activities like a weekly game night or meal together.
According to Pew Research, parents also reported that they place a high emphasis on teaching their kids about things like financial independence, mental health resilience, and good values/character (90%, 84%, and 88% respectively).
The digital world paints a similar picture. 70% of parents claim to actively monitor their teen's online activity (such as checking posts and messages), with 50% saying they even go so far as to check their child's phone (typically with the child's knowledge). According to another Pew survey, 47% of parents set time limits on phone use, and 4 in 10 parents report regular arguments with their kids about appropriate amounts of screen time. While I'm not sure how I feel about going through your kid's phone, all this suggests to me that parents do care about this stuff. You don't actively monitor and argue about things you don't care about.
Admittedly though, these statistics suggest parents are only half as engaged with their kids' digital lives as they are in "meatspace." Another study from 2021 suggested that only half of parents use digital tools to manage their children's online behavior (such as website-blocking or filtering software).
Why are parents who are otherwise so engaged in their children's lives falling short on the digital side of things?
The Cobra Effect
An apocryphal story says that in the days of British colonial India, the Brits decided something should be done about all the venomous snakes everywhere, so they offered a bounty for every dead snake brought to them. Many enterprising locals decided that rather than risk injury catching wild snakes, they'd simply breed venomous snakes in captivity and turn them in for an easy payday. Once the Brits realized what was happening, they cancelled the program, and the locals - whose snake-breeding operations had reached a near-industrial scale - simply let their "crop" loose. The result was even more snakes than before.
This story is said to be the origin of the phrase "the Cobra Effect," better known in economics as a "perverse incentive." It can be (ironically) over-simplistically defined as "the unintended consequences of an overly simplistic solution."
The truth is that protecting kids online is a complicated, nuanced thing. The internet can be dangerous and harmful, but it can also be helpful. Consider YouTube: it can be a wealth of high-quality, educational content on any topic you can imagine, but there's also no shortage of content that's unsuitable for children.
Slapping an age-gate on the internet - in part or whole - risks creating a Cobra Effect. Issuing a broad proclamation that certain sites are unsuitable for children - and deciding which sites those are - is an overly simplistic solution that opens a lot of potential for even worse harms like censorship, isolating children from positive communities and influences, and blocking genuinely helpful resources (I remember growing up in an age where overzealous parental controls censored harmless words or blocked legitimate websites as false positives).
Furthermore, it strips parents of their right to decide how to raise their kids. Many red states in the US have openly admitted they want to use these laws to censor content about reproductive rights and the LGBTQ community, so it's not hard to imagine political censorship going in the other direction eventually to block conservative content.
In the same vein, the privacy community's reaction that "parents should parent" is overly reductive.
A lot of non-parents seem to have forgotten what it was like to be young. They seem to think they can just say "I'll never give my kid a phone" or "I'll never allow them to use Facebook" and that will be that, and thus any child who comes to any kind of online harm can only be the result of lazy, negligent parents who slap an iPad in front of their kids 24/7 while the parents go off and do... I dunno, meth? I have no idea what stereotypes of parents these kinds of people have in their heads.
But parents know that children aren't obedient little clones. They're people just like you and me, complete with their own ideas, dreams, interests, and more. Of course, that doesn't make them right. Children have far less experience, wisdom, and knowledge of the world (both "book" and "street" smarts), but ignorance has never stopped people from having opinions or beliefs.
People (of any age) will always act in accordance with their ideas, not external rules. Piracy is illegal, and yet I see droves of people in my comment section defending it as "moral." Probably the same people who hypocritically think that their future kids will mindlessly obey their draconian house rules "because I said so" and definitely not make any effort to get around them. (Of course, those kinds of non-parents also probably think they're too clever to ever be outsmarted by their future kids. We'll talk about hubris another time.)
Sarcasm aside, there's also the social impacts of deciding to take such a hardline approach with your children. The Privacy Dad has written at length about parenting and privacy, including an interview with one of his kids and another about group chats with other parents, wherein he talked about the importance of making sure your kids can find friends and fit in. While there's something to be said for resisting peer pressure, it's also important for a child's mental health and psychological development to find a sense of belonging and acceptance, to find "their people."
The idea that many (mostly non-parents) seem to have that a properly attentive parent can simply force their kids away from common privacy invasions like social media and smartphones without rebellion or risk is straight-up delusional. Some kids will naturally have no interest in electronics or social media, and that will make protecting them a lot easier, but forcing that lifestyle on kids who want a digital existence will almost certainly result in making them social pariahs, building resentment in them, and causing them to flout their parents' authoritarian rules by doing things behind their backs, like having secret phones or social media accounts (I have personally witnessed young people do both of these things on multiple occasions over the years). Even if they don't rebel at the time, parents are leaving them woefully unprepared for how to handle all this stuff when they grow up, and I wouldn't be surprised if more than a few of them someday go "no contact," at least for a few years.
Ironically, privacy advocates who hold these beliefs are creating their own Cobra Effect, forcing an overly simplistic solution onto a nuanced problem - using a hatchet instead of a scalpel - and risking worse harm than what they're trying to defend against. Just like the age verification laws they're so angry about.
(Also ironically, many of these same privacy advocates have a plethora of stories about how they themselves have bypassed various rules from governments, parents, ISPs, and many others. Children are notorious for finding workarounds to restrictions, yet somehow many privacy enthusiasts forgot all that and decided that the only logical solution is that parents are just being lazy and willfully negligent.)
As the statistics showed in the first section, most parents are trying to parent. Now, of course, I don't want to take all the responsibility off the parent. To do so would also be to take away their agency as parents - their freedom to decide when is best to introduce their kids to the internet, for example. With that agency comes the responsibility to be involved, to know what your kids are doing online, to put up appropriate guard rails for them, and raise them to know the risks and how to use this stuff correctly. But clearly parents are trying, so placing the blame mostly on them is a false cause fallacy.
So where does this leave us?
Misplaced Blame
The environmental movement is a fraud. Sometimes.
Don't get me wrong: there is overwhelming, irrefutable evidence that human-made climate change is a serious problem. What I mean is that large corporations have nailed the playbook for shifting the responsibility for problems they cause onto consumers, and then continuing their harmful practices completely unabated while we're all distracted blaming each other for not doing "our part."
Take, for example, the moral panic over plastic straws back around 2015. Plastic pollution is bad and seems to be only getting worse with each new study about microplastics. Eight million tons of plastic flow into the oceans every year, but plastic straws make up only 0.025% of that.
Somehow, someone somewhere sold us all on the idea that our plastic straws were killing the planet and that you're worse than Hitler if you ask for one with your weekly "treat yourself" frappe - while corporations dump literal tons of plastic into the ocean every day regardless of whether you went to Starbucks that morning.
Now again, to be clear, I'm not opposed to making the world a better place. I'm all in favor of making things cleaner, more sustainable, and more accessible. But this was just one example of an obvious play by corporations to deflect scrutiny and responsibility onto us while they continued to do infinitely more damage. (I'm not saying the entire thing was made up by them, but they sure went along with it and no doubt benefited.) There have been several campaigns like this over the decades in the US alone.
Here's where I'm going with this: the "parents should just parent" narrative is exactly the same.
There is ample evidence that the internet is harming all of us, not just children. The second Pew survey I cited earlier also found that even parents struggle to use tech responsibly: 47% of parents say they themselves feel like they spend too much time on their phones, and 31% admit to being distracted by their phones when talking to their teens (teens put that number closer to 46%).
And yet, the narrative that social media bans and age verification laws sell is that "the internet is bad for kids." This is yet another logical fallacy wrapped up in clever marketing (since anyone pushing back can be easily painted as a monster who doesn't care about children), but it's also objectively false and shrinks the scale of the problem. If technology were only bad for kids, then it would stand to reason that we need to find a way to keep children away from it. However, if technology - in its current incarnation - is bad for everyone, that requires us to change our focus entirely and look for a different solution, perhaps looking at the tech itself and the companies behind it. After all, if the entire team is underperforming, you replace the coach, not all the players. And Big Tech will do literally anything to avoid laws standing in the way of their profits.
In the past, we have regulated entire industries like airlines, food, and medicine rather than simply age-gating them, because the issues that plagued them impacted everyone. I argue that this is the true solution to the problem that age-gating attempts to solve: we need to regulate tech companies to rein in their algorithms. We already have overwhelming evidence that tech in its current form is purposely addictive, divisive, and intentionally spreads rage and disinformation for the sake of making another buck. Tech companies don't care whether what you see on their platform is true or how it makes you feel, as long as it keeps you scrolling so they can serve more ads.
That regulation will not be simple. It will require nuance, discussion, and trial-and-error. Age-gating, on the other hand, is very simple, clean, and easy while also making for great sound-bites you can run in your re-election campaign commercials. The difference is that only one of these works. Do we want actual safety? Or just the illusion of it?
The New Oil is supported by our audience. If you're getting value out of our work, please consider supporting us.
Apes. Together. Strong.
Blaming parents isn't just unproductive - it's counterproductive, alienating potential allies in the process.
George Carlin probably said it best:
That’s the way the ruling class operates in any society: they try to divide the rest of the people; they keep the lower and the middle classes fighting with each other so that they, the rich, can run off with all the money.
Long-time readers know that I am deeply in favor of improving the overall level of tech literacy in society. I am very much on board with teaching parents what tools are out there to help them better raise their kids in today's increasingly complex and integrated digital world (more on that in a moment). And obviously, parents who actually are negligent should be held accountable. But again, to beat a dead horse, the numbers show that most parents are making an honest, good faith effort.
When it comes to the cracks in digital parenting efforts that kids slip through, I believe that in most cases, the overwhelming majority of blame needs to be placed squarely on the shoulders of tech companies. In today's society, it's common for both parents (assuming a child is fortunate enough to have both parents) to work full-time, in addition to cooking, commuting, errands, and trying to raise their kids across a variety of contexts like school, in-person activities, general safety and wellbeing, and more. Expecting someone in that situation to be able to defend against multiple full-time experts who spend 40+ hours a week designing the most addictive and rage-, anxiety-, and depression-inducing platforms the world has ever seen without the help of regulation is like expecting your average white-collar employee to hold his own in a fight against a professional MMA athlete - they're hopelessly outmatched.
We don't expect parents to be doctors and chemists to vet the safety of a cough medicine for their children, or biologists to pick safe food from the supermarket. We have laws requiring generally-available, over-the-counter products to be safe for most people when used as intended. Social media algorithms, however, are functioning exactly as intended when users "doomscroll." It's the digital equivalent of selling poison over the counter. So why, then, are we unrealistically expecting parents to be sysadmins to keep their kids safe online - especially when the digital world can, in many cases, be acting maliciously? How is the average parent supposed to preemptively defend against all these attacks? It would be a challenge for even the most experienced, tech-savvy individual to navigate.
But as long as we keep placing the blame more on the parents than the tech companies, we're just repeating the playbook. The tech companies will be happy to keep supporting age-verification initiatives (as long as it's not in their backyard), claiming they also want the internet to be safer for children, while continuing to push their harmful algorithms that hurt everyone, including children. To call these "blatant lies" is to be too diplomatic. Tech companies have a long history of knowingly collecting data on children. They don't care about the kids. They have never cared and will never care about the kids. They just care about making sure you don't pay too close attention and realize what the real problem is. Then you might want real solutions, and that would be bad for profits.
Tech companies claiming they support a safer internet is like foxes claiming they support a safer henhouse.
Solutions
Now that I'm done ranting and casting stones, it's time to get off my soapbox and focus on some actionable solutions. Rather than needlessly villainizing parents, here's some things I think we can focus on that will move the needle in a positive direction.
- Get political. If the solution is to rein in Big Tech, then that requires political action. The easiest and best way to get involved is to know what laws are currently being proposed and to contact your representatives to either support or oppose them, or to express your opinion that legislation should be focused on the tech companies instead of the consumer. You can use the following sites to find who your politicians are and what laws they're currently considering:
- Representatives
- Congress.gov
- House.gov
- Senate.gov
- CommonCause.org
- USA.gov
- Proposed Laws
- BillTrack50
- GovTrack
- FastDemocracy
- Legiscan
- (International readers should feel free to suggest resources in the comments.)
- Educate everyone on the harms of identity verification. Politicians will largely cave to public pressure, so while contacting your representative is a critical piece of the puzzle, it's equally important to help create that pressure. We need to educate everyone who will listen on why age verification proposals are flawed and the harms they will inevitably cause, and get them to push for better solutions with us.
- Educate parents on tools. Most of my readers know that parents have a wide range of tools they can use to help protect their kids online, and not just the privacy-invasive spyware stuff masquerading as parental blocks or child safety tools. Surprisingly though, many parents are totally unaware of any of these. Help educate the parents you know about things like DNS blocklists, the built-in parental controls on Apple and Android devices, setting online accounts to child or teen accounts, and other technical but user-friendly tools parents can use to help their children. (For example, offer to help them set up Mullvad's DNS on their router instead of complex solutions like Pi-hole. Keep it as simple as possible.) Be sure to talk to parents to understand their exact concerns and how to best address them. A DNS blocklist won't stop strangers from contacting their kid online, but it will stop them from reaching most known porn sites. Meanwhile, setting the child's account to an actual child-designated account in the settings will likely put up friction for strangers trying to contact their kid. You can help them research what these tools offer and whether they're a good solution or not.
- Parents: consider using AI. Okay look, I know AI is really controversial, but one thing AI is really good at is synthesizing answers to complex questions. A big problem for many people is "not knowing what you don't know." AI can be a great tool for asking a question and using the answer as a starting point for knowing where to look. You could ask it things like "how can I keep my kid from accessing porn on Reddit?" or "what harmful things might my ten-year-old encounter on Roblox?" Just remember to fact-check it and avoid revealing any personal information like names, dates of birth, or other identifying details. You can see my list of recommended AI chatbots for better privacy here.
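To make the DNS-blocklist suggestion above concrete: on many home routers (and anything running OpenWrt), DNS is handled by dnsmasq, and pointing it at a filtering resolver takes only a couple of lines. This is a hedged sketch, not a complete config - the resolver addresses shown are Cloudflare's publicly documented "1.1.1.1 for Families" filtering addresses, used here as an example; Mullvad and other providers publish their own filtering resolvers, so check your chosen provider's documentation for current addresses, and note the config file path may differ on your device.

```
# /etc/dnsmasq.conf (sketch - exact path varies by router/firmware)

# Ignore the ISP-provided resolvers from /etc/resolv.conf
no-resolv

# Forward all queries to a content-filtering resolver instead.
# These are Cloudflare's "1.1.1.1 for Families" addresses
# (malware + adult-content filtering); substitute your preferred
# provider's filtering resolver here.
server=1.1.1.3
server=1.0.0.3
```

The appeal of doing this at the router is that every device on the home network inherits the filtering automatically, with nothing to install per-device - though remember that DNS filtering only blocks known domains; it does nothing about in-app content or direct messages.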
Parting Thoughts
Age verification is an example of what one of my friends calls "nerd harder." It's what happens when someone thinks that a problem is technical in nature, and therefore we can find a solution if we just "nerd harder" - write better code, or more code, or different code, or [insert some technological fix here]. But obviously, not all problems are technical problems or have technical solutions. Signal, for example, cannot defend against a device that's infected with keyloggers or screenshotting malware, nor someone taking a photo of the screen with another device, nor even just simply showing their screen to someone else or telling them what was said in the chat. There are no technical solutions there. No amount of "nerding harder" will ever fix that. Telling parents to simply "parent harder" is the same. Even for the most attentive and offline families, it doesn't take a lot of screen time for Big Tech companies to start tracking kids or for kids to accidentally stumble across something unsafe. Lord knows I've found worse with a lot less effort. There are only so many technical solutions we can put in place to defend against this stuff.
I also find it interesting that many tech enthusiasts inadvertently discredit themselves or downplay their hard work by implying that everyone should be able to easily acquire the same skills we've spent years honing. I've been "into" privacy since about 2016. Since that time, I've learned how to install a custom operating system onto my phone and router; self-host my own office suite (Nextcloud), social media, and media server; and many smaller skills like changing my DNS, switching to private or encrypted services, and using Linux. None of these skills are beyond the reach of anyone reading this post, but I remember what it was like to be scared of screwing something up and wondering how easily reversible the damage would be. This stuff takes time to learn and practice - even the simple stuff - and implying that parents should just be able to spend 5 minutes on YouTube and do it perfectly and confidently on the first try is both tone-deaf and self-deprecating, downplaying all the hard work we've put in to get here.
Finally, I want to end by reinforcing the need to rein in Big Tech. In the past few years, I've heard a lot of people compare the growing pains of modern tech (especially social media) with the transition from yellow journalism to our modern journalistic landscape. The comparison is that once upon a time, a new disruptive technology came on the scene and ran wildly unchecked, causing all sorts of problems until legislation reined it in and it became something that benefited society. The internet has become a vital linchpin in our modern world, with everyone depending on it in some form or another. It's well past time to start acknowledging the important role it plays today and start forcing it to serve society as a whole rather than just a few oligarchs at the top at the expense of everyone else. It won't be a sexy, easy, catch-phrase Band-Aid. It will require nuance and patient fine-tuning, but it will be a much more effective and long-term solution to many problems.
Tech changes fast, so be sure to check out our website for all the latest recommendations, tools, services, and more.