Evan Warfel

Regulations to Reduce Screen Addiction and Improve Users’ Experiences

Updated: Sep 25, 2019

Five Human Tech Policy Proposals


In 1833, near the tail end of the first industrial revolution, the British government passed the Factory Act, which made it illegal to do then-profitable things like employ kids younger than 9 and employ 9–13-year-olds for more than 9 hours a day.


Today, a handful of tech ethicists and non-profits are pushing for consumer-facing tech regulation. However, I have seen very few specific proposals that address screen addiction, so I’ve written up five proto-policies below. At a high level, they can be summarized as:


1. Ensure that the timing of notifications isn’t in any way “gamed”

2. Allow people to pay for products and not see ads

3. Allow users to have distinct settings for notifications from personal connections, and notifications from machine-generated actions

4a. Allow users to have control over the quality of recommendations

4b. Communicate and enforce the veracity of recommendations


I like to think that if tech companies (or their shareholders) were interested in protecting their long-term brand reputation, they’d implement the ideas I’ve outlined here. However, I am not counting on the industry to self-regulate, so the proposals include a light discussion of some legal considerations.


Note: This is far from an exhaustive list. I imagine some of the dark design patterns could be included as well. Feel free to contact me with ideas for more regulations, feedback on the existing ones below, and general suggestions.


 


1. Timing and Grouping of Notifications

Issue: Machine learning algorithms that power social media newsfeeds may learn that if they “time” the delivery of notifications correctly, they make the app “more addictive.” Thus notification timing may be gamed to keep users’ attention from waning.


Potential policy: Legally require that users be able to make the following choices about when notifications should be “delivered”:


A) in a business-as-usual fashion, wherein it’s up to the platform provider to determine when to deliver notifications

B) in near-real-time (within, say, ~5–10 seconds of when they happen)

C) during a user-determined time window

D) in user-defined groups or batches.


For example, when she logs on, Sasha Smithsonian has the option of snoozing all notifications from a given social media site until 5–7pm each day (with the exception of interactions that require real-time delivery, like online conversations). In addition, she has set her platform to batch her notifications in groups of five.
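To make the options concrete, here is a minimal sketch of what Sasha’s delivery preferences might look like in code. All names, fields, and modes are hypothetical, not any platform’s actual API:

from dataclasses import dataclass, field
from datetime import datetime, time
from typing import List

@dataclass
class NotificationPrefs:
    mode: str = "platform_default"    # "platform_default" | "realtime" | "window" | "batch"
    window_start: time = time(17, 0)  # used when mode == "window" (e.g., 5pm)
    window_end: time = time(19, 0)    # e.g., 7pm
    batch_size: int = 5               # used when mode == "batch"

@dataclass
class NotificationQueue:
    prefs: NotificationPrefs
    pending: List[str] = field(default_factory=list)

    def should_deliver(self, now: datetime) -> bool:
        """Decide whether queued notifications may be delivered right now."""
        if self.prefs.mode == "realtime":
            return True
        if self.prefs.mode == "window":
            return self.prefs.window_start <= now.time() <= self.prefs.window_end
        if self.prefs.mode == "batch":
            return len(self.pending) >= self.prefs.batch_size
        return True  # "platform_default": the provider decides; deliver immediately here

    def enqueue(self, notification: str, now: datetime) -> List[str]:
        """Queue a notification and return whatever is ready to be delivered."""
        self.pending.append(notification)
        if self.should_deliver(now):
            ready, self.pending = self.pending, []
            return ready
        return []

The point of the sketch is only that the delivery decision sits with a user-controlled setting rather than with an engagement-optimizing model.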


Ensuring that users have these options is easy enough to enforce. Making sure that notifications aren’t gamed anyway will take more effort; but just as Attorneys General employ investigators, they might also employ forensic technologists who can help determine these things.


To enforce choice (B), a log of the distribution of discrepancies between the time an action was initiated and the time the corresponding notification was delivered could be made public or registered with a government agency. In addition, a browser extension could enable investigators or users (who agree to do this beforehand) to log or screen-capture when they invite a friend to an event, while the friend logs or screen-captures when they receive the invitation.
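Here is a sketch of how such a log might be summarized, assuming each record carries the time the action was initiated and the time the notification was delivered (the field layout and the 10-second threshold are assumptions, not a real reporting format):

from datetime import datetime
from statistics import median, quantiles

def delivery_delays(log_records):
    """Compute delivery delays in seconds from (action_time, delivered_time) pairs."""
    return [(delivered - initiated).total_seconds() for initiated, delivered in log_records]

def summarize(delays, realtime_threshold_s=10.0):
    """Summarize the delay distribution and the share delivered in near-real-time."""
    deciles = quantiles(delays, n=10)
    within = sum(d <= realtime_threshold_s for d in delays) / len(delays)
    return {
        "median_delay_s": median(delays),
        "p90_delay_s": deciles[-1],
        "share_within_10s": within,
    }

# Toy example: one notification delivered 4 seconds after the action, one 90 seconds after.
records = [
    (datetime(2019, 9, 1, 12, 0, 0), datetime(2019, 9, 1, 12, 0, 4)),
    (datetime(2019, 9, 1, 12, 5, 0), datetime(2019, 9, 1, 12, 6, 30)),
]
print(summarize(delivery_delays(records)))

A regulator (or a forensic technologist) comparing this distribution against the user’s chosen delivery mode could flag platforms whose delays look systematically engineered rather than incidental.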


2. Paying for a No-Ad Product

Issue: People can’t remove digital ads in certain cases, even if they want to.


Potential policy: Make the following a legal requirement for any website or app that monetizes via ads: users must be given the option to pay for the product and not see ads.


Enforcement Mechanism: For apps, assuming the aforementioned idea is eventually signed into law, one enforcement mechanism would be to allow Attorneys General to hold website/app owners, as well as app stores (or their parent companies), liable for the apps they distribute.


Additionally, apps and websites could be manually or programmatically evaluated for a) whether they are likely to be running ads, and b) whether they offer a paid option that does, in fact, shut off the ad displays.
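As a rough illustration of the programmatic side, an auditor could check whether a page references well-known ad-serving domains. This is only a heuristic sketch; the domain list is a small hand-picked sample, and a real audit would need a headless browser and a maintained blocklist such as EasyList:

import urllib.request

# Small illustrative sample of ad-serving domains; not an exhaustive blocklist.
AD_DOMAINS = ["doubleclick.net", "googlesyndication.com", "amazon-adsystem.com", "adnxs.com"]

def likely_running_ads(url: str) -> bool:
    """Return True if the page's HTML references a known ad-serving domain."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="ignore")
    return any(domain in html for domain in AD_DOMAINS)

Running the same check while logged in to a paid, supposedly ad-free account would test whether the paid option actually shuts the ads off.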


Then again, maybe the best approach to enforcement would involve the creation of something like an FCC or Consumer Financial Protection Bureau for technology.*


*(I had a half-baked short story idea a while back: if the Federal Reserve were in charge of regulating tech apps, then every time the economy needed a boost it could lower interest rates as well as adjust parametric legal requirements related to how “addictive” certain apps and websites can be…)


A possible extension to this policy: for companies that primarily make money via ad monetization and whose paid tier adds no customer value beyond “not displaying ads” (i.e. no premium features, etc.), cap the price they may charge at X times the median per-user value in ad sales. To help enforce this, data consisting of monthly ad revenue and the monthly number of users could be filed with a government agency, similar to how the SEC requires all hedge funds over a certain size to report their quarterly stock positions via 13F filings.
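To make the arithmetic concrete, here is a toy calculation of such a cap. The revenue figures and the multiplier X are invented purely for illustration:

from statistics import median

# Proposed cap: ad-free price may not exceed X times the median monthly ad revenue per user.
monthly_ad_revenue_per_user = [0.40, 1.10, 2.25, 3.00, 0.85]  # dollars, from a hypothetical filing
X = 3  # hypothetical multiplier set by the regulation

price_cap = X * median(monthly_ad_revenue_per_user)
print(f"Maximum allowed ad-free subscription price: ${price_cap:.2f}/month")  # $3.30/month

Using the median rather than the mean keeps the cap from being inflated by a handful of unusually valuable users.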


3. Distinguishing Between Meaningful and Non-Meaningful Notifications

Issue: All of us want to be notified when other people interact with us or with content we’ve created. Certain platforms have co-opted this desire and mix in notifications about happenings users might be interested in but that don’t directly affect them. I suspect one reason for this is that it’s in companies’ best interests to have some of the notifications be less meaningful, as doing so makes the reward of a good notification seem more unpredictable and thus “addictive.”


Proposed Solution: Make it easy for users to distinguish “direct” from “indirect” notifications, and then require that users have fine-grained control over both. For example, a user might deem every “In case you missed X” or “While you were away, Z happened” message an indirect notification.
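One way a platform could expose this distinction is sketched below. The notification type names are invented; the only substantive idea is that the two classes get independent on/off switches:

# Classify notifications as "direct" (someone interacted with the user or their content)
# or "indirect" (suggestions and "in case you missed it"-style items), and let the user
# mute either class independently.
DIRECT_TYPES = {"reply", "mention", "friend_request", "event_invite", "comment_on_your_post"}
INDIRECT_TYPES = {"in_case_you_missed_it", "while_you_were_away", "suggested_post", "trending_topic"}

def is_direct(notification_type: str) -> bool:
    return notification_type in DIRECT_TYPES

def filter_notifications(notifications, allow_direct=True, allow_indirect=False):
    """Apply the user's per-class settings (here: direct on, indirect muted)."""
    return [
        n for n in notifications
        if (is_direct(n["type"]) and allow_direct)
        or (not is_direct(n["type"]) and allow_indirect)
    ]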


Enforcement Mechanism: A fine seems like a good idea here; possibly hold app stores and distributors liable for apps that don’t make this distinction.


4a. Quality of Recommendations

Issue: Users may not want the best recommendations all the time, for example when they are browsing YouTube after midnight and find that they can’t bring themselves to quit.


Most recommendation systems work by, at some point, ranking the items being recommended according to some definition of “best.” Take dating apps, for example: some may rank recommendations via “profile similarity” or “predicted compatibility,” whereas for others it might be “similarity along dimensions of preferences.” And provided things haven’t changed too much, the YouTube recommender uses two neural networks: one to generate recommendation candidates for each user, and another to rank those candidates.


Proposed Solution: For whatever definition of “best” is being used to rank recommendations, allow users to control whether they want to see things from the top, middle, or bottom third of the list.
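A minimal sketch of what that control could look like, layered on top of whatever ranked list the platform already produces (the function and tier names are hypothetical):

def select_by_tier(ranked_items, tier="top", k=10):
    """Return up to k items from the top, middle, or bottom third of a ranked list.

    ranked_items is assumed to be sorted best-first by whatever definition of "best"
    the platform already uses; this only changes which slice of the ranking the user sees.
    """
    n = len(ranked_items)
    third = max(n // 3, 1)
    slices = {
        "top": ranked_items[:third],
        "middle": ranked_items[third:2 * third],
        "bottom": ranked_items[2 * third:],
    }
    return slices[tier][:k]

# Example: after midnight, a user might deliberately ask for the middle third.
videos = [f"video_{i}" for i in range(30)]  # already ranked best-first
print(select_by_tier(videos, tier="middle", k=5))  # video_10 ... video_14

Because the ranking itself is untouched, the platform’s model stays proprietary; only the user-facing slice changes.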


Enforcement Mechanism: Forensic data scientists could be given access to logs of recommendations and user behavior.


4b. Veracity of Recommendations

Issue: It is sometimes in a company’s best interests to give users the second-best recommendation.


For example, a dating app that limits the number of free matches a user can see may not “want” to always return the best matches. If it returns the “second best” matches, it might convince the user that “wow, this dating thing almost works! I’m just around the corner from finding my life partner… let me pay for the premium version.” Private companies are within their rights to do this kind of thing; I just think they should be explicit and transparent about what they are doing.


Potential Solution: Communicate the quality of recommendations and matches to users, and have that communication verified by a neutral third party (of forensic data scientists and engineers). In the example above, companies would likely see increased revenue if they were more transparent about how they could return “higher quality” recommendations or matches.
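As a sketch of what “communicating quality” could mean in practice, a platform might report where each shown match sits in the user’s full ranked candidate pool. The labels and thresholds below are invented, not a real disclosure standard:

def match_quality_label(shown_rank: int, pool_size: int) -> str:
    """Describe where a shown match sits in the user's full ranked candidate pool.

    shown_rank is 1 for the single best candidate; pool_size is the total number of
    ranked candidates. The labels are illustrative only.
    """
    percentile = 100 * (1 - (shown_rank - 1) / pool_size)
    if percentile >= 90:
        return f"top 10% of your {pool_size} candidates"
    if percentile >= 50:
        return f"top half of your {pool_size} candidates"
    return f"bottom half of your {pool_size} candidates"

# A user shown the 120th-best of 1,000 candidates would see:
print(match_quality_label(120, 1000))  # "top half of your 1000 candidates"

A neutral third party with log access could then verify that the labels shown to users match the platform’s own internal rankings.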
