Friday, December 13, 2024

How To Solve The Big Tech Problem Without Violating Anyone's Rights (Updated Re-Post)

"Big Tech is the new Big Tobacco" is often bandied about these days.  And while that has a kernel of truth to it (a kernel the size of a cornfield, in fact), it is also used by authoritarian zealots with a very illiberal (and ageist) agenda.  Mandatory age verification, censorship, repealing Section 230, and other related illiberal restrictions would open up the door to many unintended consequences to privacy, cybersecurity, and civil rights and liberties in general.  Even those adults who don't support youth rights will eventually experience these consequences sooner or later.  Kafka, meet trap.  Pandora, meet box.  Albatross, meet neck.  And of course, baby, meet bathwater. 

And none of these things will actually solve the collective action problem of Big Tech and the "Social Dilemma".  But here are some things that will, in descending order of priority and effectiveness:

  1. First and foremost, take a "Privacy First" approach as recommended by the Electronic Frontier Foundation (EFF).  Pass comprehensive data privacy legislation for all ages that, at a minimum, would ban surveillance advertising, and ban data brokers too.
  2. Audit the algorithms and internal research of the Big Tech giants, and make the results publicly available for all to see.  Sunlight is truly the best disinfectant. 
  3. Require the strictest and safest privacy settings to be the defaults for all users of all ages, which users can then loosen themselves. For example, "friends only" sharing by default, DMs disabled by default from anyone the user does not follow, and the option to turn off all DMs entirely.
  4. Require or incentivize "architectural" safety features on all social media, such as nudges, #OneClickSafer ("stop at two hops") to reduce the pitfalls of frictionless sharing (see the sketch after this list), and wider use of CAPTCHAs and similar tools to root out the pervasive toxic bots.
  5. If, after doing all that, We the People still feel we must get stricter in terms of age, then don't make things any stricter than current California standards (i.e., the CCPA and the CAADCA). That is, a "Kids Code" would be fine as long as it is properly written and doesn't result in censorship or mandatory age verification.
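
To make item 4 more concrete, here is a minimal sketch of what a "stop at two hops" reshare limit could look like in code. It is purely illustrative: the Post class, MAX_HOPS constant, and try_reshare function are hypothetical names invented for this example, not any real platform's API.

    # Minimal sketch of a "stop at two hops" reshare limit (#OneClickSafer).
    # All names here are hypothetical illustrations, not a real platform's API.
    from dataclasses import dataclass

    MAX_HOPS = 2  # content may travel at most two hops from its original poster

    @dataclass
    class Post:
        author: str
        hops_from_origin: int = 0  # 0 = original post

    def try_reshare(post: Post, resharer: str) -> Post | None:
        """Allow a one-click reshare only while the post is within the hop limit."""
        if post.hops_from_origin >= MAX_HOPS:
            # Beyond two hops, frictionless resharing is blocked; users could
            # still link or quote manually, just with more friction.
            return None
        return Post(author=resharer, hops_from_origin=post.hops_from_origin + 1)

    # Example: an original post can be reshared twice, but not a third time.
    original = Post(author="alice")
    first = try_reshare(original, "bob")    # allowed (hop 1)
    second = try_reshare(first, "carol")    # allowed (hop 2)
    assert try_reshare(second, "dave") is None  # blocked at hop 3

The point of the sketch is simply that the hop count travels with the content, so the platform (not the user) decides where frictionless sharing stops.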

The first two items on the list in particular would, of course, be vehemently opposed by Big Tech. That's because their whole business model depends on creepy surveillance advertising and creepy algorithms, and thus on incentivizing addiction for profit. If these measures were enacted, they would have to switch to the (gasp!) DuckDuckGo model. (Plays world's smallest violin.) That would of course be tantamount to throwing the One Ring into the fires of Mount Doom in J.R.R. Tolkien's Lord of the Rings.

For another, related collective action problem, what about the emerging idea of phone-free schools? Fine, but to be fair, how about phone-free workplaces for all ages as well? In both cases, it should ONLY apply while "on the clock", which for school would be best defined as from the opening bell to the final bell of the day, plus any after-school detention time. And of course, in both cases, there would have to be medical exemptions for students and employees who need such devices for real-time medical monitoring (glucose monitoring for diabetes, for example). Surely productivity would increase so much as a result that we could easily shorten the standard workweek to 30-32 hours (8 hours for 4 days, or 6 hours for 5 days) with no loss in profits. But that would make too much sense.

Other good ideas we would endorse are a voluntary smartphone buyback program (similar to gun buybacks), and perhaps even paying people to voluntarily delete or deactivate their social media accounts for a time. That would accomplish far more than any realistic mandatory measures would.

Another possible idea is simply to slow down the pace of these social media platforms by design. Much like #OneClickSafer mentioned above, adding a little friction to an otherwise frictionless system can help tame that system's very real dark side, as sketched below. I mean, would you willingly drive on a frictionless surface (such as ice)? Of course you wouldn't.
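
As a concrete illustration of what "adding friction by design" could mean in software, here is a minimal sketch of a per-user cooldown between shares. The FrictionGate class and the COOLDOWN_SECONDS value are assumptions invented for this example; a real platform would tune the numbers and enforce this server-side.

    # Minimal sketch of "friction by design": a per-user cooldown between shares.
    # The class name and cooldown value are hypothetical, not a real API.
    import time

    COOLDOWN_SECONDS = 60.0  # assumed value; a platform would tune this

    class FrictionGate:
        """Tracks each user's last share time and enforces a short cooldown."""

        def __init__(self, cooldown: float = COOLDOWN_SECONDS) -> None:
            self.cooldown = cooldown
            self._last_share: dict[str, float] = {}

        def may_share(self, user: str, now: float | None = None) -> bool:
            now = time.monotonic() if now is None else now
            last = self._last_share.get(user)
            if last is not None and now - last < self.cooldown:
                return False  # still cooling down: hold the share rather than posting instantly
            self._last_share[user] = now
            return True

    # Example: a second share attempt within the window is held back.
    gate = FrictionGate(cooldown=60.0)
    assert gate.may_share("alice", now=0.0) is True
    assert gate.may_share("alice", now=10.0) is False
    assert gate.may_share("alice", now=75.0) is True

Even a pause this small re-introduces the kind of natural friction that one-click sharing removed, which is the whole point.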

Note that internet connection speeds are more than ten times faster (!) today on average than in 2010.  That leaves a LOT of room for adding back friction!

And finally, the idea of banning certain questionable design features (infinite scroll, autoplay, etc.) may be controversial in terms of whether such features are protected by the First Amendment, but we believe that those features per se are not automatically protected, unless a ban is deliberately abused to censor specific content. If such bans are truly content-neutral, we are fine with them.

We must remember that, at the end of the day, Big Tech is NOT our friend.  But neither are the illiberal control freak zealots.  These measures that we endorse will actually make both sides quite angry indeed.  But truly that's a feature, not a bug.

Big Tech can go EFF off!

UPDATE: We had opposed KOSA until recently due to censorship concerns, and while those concerns have been somewhat alleviated by recent edits to the bill, we still cannot say we support it 100%. But for now, we have dropped our opposition, if for no other reason than to forestall more restrictive bills (like Australia's new law) in the future. Thus, the TSAP and Twenty-One Debunked are currently neutral on KOSA, despite it still not being ideal.
