Technology & Policy
Banning Isn’t the Answer. But What Is? The Case for Holding Platforms Accountable for What They Build
India’s state governments — from Karnataka to Punjab — are reaching for social media bans to protect children. The impulse is understandable. The solution is insufficient. The real conversation needs to be about platform design, not platform prohibition.
Something significant is happening in India’s state assemblies, and it is worth paying attention to. Over the past several months, governments of widely different political persuasions — Karnataka, Andhra Pradesh, Goa, Punjab — have all begun moving in the same direction on the same issue: the damage that social media platforms are doing to children.
When politicians from different parties, in different regions, start saying the same thing, it usually means one of two things: either a moral panic is in the air, or a genuine problem has crossed a threshold that can no longer be ignored. In this case, the evidence suggests the latter.
This year’s Economic Survey framed social media addiction as a health challenge, identifying compulsive use among young Indians as a driver of anxiety, depression, low self-esteem, sleep disturbances, reduced concentration, and poorer academic performance. These are not anecdotes. They are documented public health outcomes.
The question, though, is whether the tools being reached for — bans, restrictions, age limits — are actually capable of solving what they are being aimed at. And the honest answer is: probably not on their own. Banning technology is not the solution. But the status quo — in which platforms set their own rules, design their own products, and absorb the consequences of both — also cannot continue.
What Children Are Actually Being Exposed To
The harms are not hypothetical. Children on social media platforms encounter a consistent mix of damaging content and damaging dynamics:
- Sexualised material surfaced by recommendation algorithms that young users did not search for
- Influencer-driven content promoting unrealistic body standards, lifestyle expectations, and aspirational consumption
- Manipulative trends that spread virally through youth-heavy spaces
- Cyberbullying and pile-on dynamics that are often algorithmically amplified rather than dampened
- Predatory contact from adults who exploit the openness of these platforms
- Generative AI-enabled harms — synthetic content, deepfakes, and misleading material that is increasingly indistinguishable from the real thing
Platforms have introduced responses — teen accounts, parental controls, content filters — and these have real value. But they have not reversed the trend. The harms have continued to grow even as the tools supposedly addressing them have multiplied. Something structural is wrong, and structural problems require structural responses.
The Real Problem: Design, Not Just Content
For most of the history of online child safety, the focus has been on content: removing harmful posts, blocking dangerous accounts, flagging illegal material. These interventions matter. But they address symptoms. The deeper problem — the one that keeps producing new symptoms faster than they can be removed — is design.
What Is “Engagement-Maximising Design”?
The Business Model That Keeps Children Online Longer Than They Should Be
Social media platforms make money by keeping users on the platform for as long as possible — because more time on platform means more advertising revenue. Their products are therefore deliberately designed to maximise “engagement,” which is a neutral-sounding word for the combination of habits, emotions, and compulsions that make it hard to put the phone down. Infinite scroll prevents natural stopping points. Notification systems create anxiety about what you might be missing. Recommendation algorithms prioritise content that provokes strong emotions — outrage, envy, excitement — because strong emotions drive clicks and shares.
For adults, this is a problem of attention and wellbeing. For developing brains in children and teenagers, it is a public health issue. The product was not designed with children’s welfare in mind. It was designed for engagement. The two goals are often in direct conflict.
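To make the conflict concrete, here is a deliberately simplified sketch, in Python, of what an engagement-weighted ranking heuristic can look like. Every name, signal, and weight below is invented for illustration; no platform’s actual code is being quoted. The point is only that a score built from watch time, shares, and emotional intensity contains no term for the viewer’s welfare.

```python
from dataclasses import dataclass

@dataclass
class Post:
    predicted_watch_seconds: float  # how long the model expects a user to linger
    share_probability: float        # predicted likelihood of a share (0 to 1)
    emotional_intensity: float      # predicted outrage/envy/excitement (0 to 1)

def engagement_score(post: Post) -> float:
    """Toy ranking heuristic: rewards whatever keeps users on the platform.

    Note what is absent: nothing asks whether the content is age-appropriate,
    accurate, or good for the viewer. Strong emotion raises the score, so the
    most provocative candidates float to the top.
    """
    return (
        1.0 * post.predicted_watch_seconds
        + 40.0 * post.share_probability
        + 25.0 * post.emotional_intensity  # provocative content is boosted, not dampened
    )

def build_feed(candidates: list[Post]) -> list[Post]:
    # Sorting a candidate pool by this score is, in effect, the feed.
    return sorted(candidates, key=engagement_score, reverse=True)
```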
This means the conversation needs to shift from “what content should be removed?” to “what design choices should not be permitted in the first place?” The distinction matters enormously. Content moderation is reactive — it addresses harm after it has already been created and seen. Design accountability is preventive — it changes the conditions under which harm becomes likely.
What Platforms Should Actually Be Required to Do
The practical changes are not technically difficult. They are commercially inconvenient — which is why platforms have not made them voluntarily, and why waiting for voluntary action has not worked. The following changes to platform design for youth-facing environments should become a baseline expectation, not an optional enhancement:
A Design Accountability Baseline for Youth Platforms
1. Intentional Friction
Platforms should make it harder for teens to stay online too long. This includes:
- Time limits: enforced breaks that cannot be easily dismissed.
- Night mode: automatic restrictions late at night, when harmful usage peaks.
2. Limits on Virality
Content should not spread at light speed through under-18 spaces. By slowing algorithmic boosts and sharing features for minors, platforms can stop dangerous trends before they spiral out of control. (A sketch after this list shows one way checks like these might look if written as code.)
3. Human-First Moderation
AI is not enough to catch the nuance of teen interactions. High-risk areas need real people who understand context and can respond quickly to complex safety issues.
4. “Sandbox” Safety Testing
New features, such as AI tools or new feed designs, should not be tested on the public. They must be vetted in controlled, external environments before they reach millions of children.
5. Real Community Input
Design should not happen in a vacuum. Platforms must give a seat at the table to:
- Parents and teens
- Educators and experts
- Mental health professionals
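As a sketch of what items 1, 2, and 4 might look like if written down as enforceable rules rather than aspirations, here is a minimal Python illustration. The thresholds, function names, and the idea of an externally reviewed allow-list are assumptions invented for this example; real limits would be set by regulation and evidence, not hard-coded by a commentator.

```python
from datetime import datetime, time

# Illustrative thresholds only; actual values would come from policy.
DAILY_LIMIT_MINUTES = 90
NIGHT_START = time(22, 0)   # 10 pm
NIGHT_END = time(6, 0)      # 6 am
MINOR_REACH_CAP = 5_000     # ceiling on algorithmic amplification to minors

def session_allowed(is_minor: bool, minutes_used_today: float, now: datetime) -> bool:
    """Intentional friction: hard-stop a minor's session at the daily limit
    or inside the night window, rather than showing a dismissible nudge."""
    if not is_minor:
        return True
    if minutes_used_today >= DAILY_LIMIT_MINUTES:
        return False
    t = now.time()
    in_night_window = t >= NIGHT_START or t < NIGHT_END
    return not in_night_window

def capped_reach(requested_boost: int, audience_is_minor: bool) -> int:
    """Limits on virality: damp algorithmic boosts into youth audiences so a
    trend cannot reach millions of teen feeds in an afternoon."""
    return min(requested_boost, MINOR_REACH_CAP) if audience_is_minor else requested_boost

def feature_cleared_for_minors(feature_id: str, vetted_allow_list: set[str]) -> bool:
    """Sandbox testing: a new feature reaches youth accounts only after it has
    passed external review, never by default."""
    return feature_id in vetted_allow_list
```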
The Bottom Line: Social media companies should treat youth safety as a deliberate design choice, not an afterthought. That means prioritising wellbeing over engagement metrics.
Why India Needs a Dedicated Digital Safety Regulator
The scale of the problem in India is specific and significant. India has one of the largest and youngest online populations in the world. Enforcement of existing platform obligations is uneven. And as different states begin responding to the crisis in different ways — some with bans, some with advisories, some with school policies — the regulatory picture is becoming increasingly fragmented. A child in Karnataka faces different rules than a child in Uttar Pradesh on the same platform.
What is needed is a specialised digital safety regulator — not a general technology oversight body, but one specifically focused on risks to minors. Such a body would have three core functions:
Monitoring — tracking harms systematically, across platforms, with the authority to demand meaningful data disclosures about how content reaches children, how harmful trends spread, and what design features are operating in youth spaces.
Review — independent assessment of high-risk design features before they are deployed at scale in youth-heavy environments.
Enforcement — meaningful consequences when platforms fail to meet safety obligations, not just guidance and voluntary commitments that carry no penalty for non-compliance.
The regulator’s mandate should focus not only on content takedowns — which remain important — but on the broader design conditions that make harm more likely. A platform that removes harmful posts while maintaining the design architecture that spreads them at scale is not solving the problem. It is managing its optics.
The Moment We Are In
The state-level moves toward social media bans are, at their root, an expression of institutional frustration. Governments have watched platforms self-regulate for years, watched the harms grow, watched the tools offered in response prove insufficient, and have now reached for the bluntest instrument available. The ban is not a solution. It is what happens when no one offers a better one.
The task now is to offer that better one. The technology is not going away, and nor should it — the internet is genuinely valuable to young people, including for learning, connection, creativity, and expression. The goal is not to keep children off platforms. It is to make those platforms safer to be on — through design choices that platforms should have made voluntarily, through regulatory frameworks that require them to make those choices, and through institutional oversight that can tell the difference between safety theatre and safety.
For children in India who are spending an average of five hours a day on their phones — surrounded by content they did not choose, shaped by algorithms optimised for someone else’s revenue — the question is whether the adults responsible for their welfare are going to keep having the same argument about bans while the system that harms them remains unchanged. The answer to that question will define what kind of digital childhood the next generation actually has.