
🔥 Human face of technology #19

The newsletter took on a bit of a different form today – let me know what you think of it.

🤯 What could Facebook do with their main product to sidestep some problems?

In the previous edition of my newsletter, I discussed why I’d deleted my Facebook account. I reckon that to maintain some intellectual rigour, we need to think about how Facebook could change their flagship product to solve at least some of their problems. The main issues with Facebook I’ve been able to identify are: optimising the product to sustain the highest possible engagement (addicting users), amassing boundless volumes of data, a lax policy for sharing the accumulated data with other businesses, and propagating untruths and manipulation.

If Facebook were to try and solve the above problems, many of the changes they would need to implement would likely hurt their core business, at least in the short term. The changes might not have been implemented yet for this very reason. Alternatively, no changes might have been implemented because they would entail an entirely different model of measuring Facebook’s business indicators and setting goals – and a different organisational culture as a result. As an impartial observer, I can opt not to engage with those business issues and let myself conjure up any solutions I want. And that’s precisely what I will do now.

Verifying the identity of all users. One of Facebook’s main problems, which had an impact on the US presidential election, was fake accounts run by people and bots. The issue could be solved (at least in many developed countries) by adding one more step to each user’s onboarding: users would be asked to take a picture of their personal ID. Today’s technology makes this feasible at scale – many fintech startups already verify their users’ identities automatically. It could slow Facebook’s growth, but it would also be proof that Facebook actually cares that its services are used only by flesh-and-blood human beings.

Paid ad-free version of Facebook. Facebook earns about $24 per user simply by showing them ads. I can imagine that there are people in the world who would gladly pay for Facebook’s services, and to whom $24 would not be an extortionate sum. It would give users a choice: either advertisers target you through the data you leave behind and you use Facebook’s services for free, or you pay for the service and don’t have to worry about your data being collected and used to build marketing campaigns. The money coming from paying users would be a strong incentive for Facebook to develop services that add real value to their users’ lives instead of merely addicting them and hoarding ever more data about them. The user would become a paying customer, and that changes a lot. Paywalls work on many websites, so why wouldn’t they work on Facebook?

Strict verification of apps using Facebook’s API – just like in Apple’s ecosystem. Zuckerberg has not given up on the idea of Facebook as a platform for developers who want to build apps on top of the data Facebook collects. The Cambridge Analytica scandal was a direct consequence of this strategy. If a user consents to an external app accessing their data on Facebook, the app can collect and store those data (including data on some of the user’s Facebook friends). Users do not understand all the complex concepts behind data access – when they click the consent button, they don’t really know what they are consenting to or what the consequences might be. It’s not their job to know. Facebook should ensure adequate security for people using their services and check whether user data are used exactly as the developer declared they would be. Applications and developers should be subject to a strict verification process and then regular audits, so that the risk of users’ private data leaking is minimal. Apple do that in their App Store, and I think Facebook could do the same.

Setting limits on how long data are stored. There’s no easy way to delete large amounts of data from Facebook in one quick action – old pictures, old status updates, message archives, and so on. You can delete some information, but it’s tedious: you need, for instance, to remove every single conversation in Messenger one by one. This is no accident – it was a conscious decision by Facebook’s designers. Facebook doesn’t want us to delete our data. By contrast, one of Slack’s interesting features is the option of setting a data retention period for each channel or person we talk to. Thanks to this, we can tell Slack to delete messages from a given channel after a month or a year. Facebook, in comparison, keeps your data forever. If you created your account ten years ago, your old messages, status updates, and pictures still lie there somewhere and can leak if a security breach occurs. Considering that many people “grew up” using Facebook, this digital footprint is a real privacy issue. The right to be forgotten is a pressing social problem, and Facebook should tackle it head on – for instance, by offering data retention settings.
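To make the idea concrete, here is a minimal sketch of what per-conversation retention could look like, modelled on Slack’s channel-level settings. Everything here (the policy table, the message fields, the function name) is invented for illustration – neither Facebook nor Slack exposes such an API publicly.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical per-conversation retention policies, in the spirit of
# Slack's channel retention settings. Conversations without an entry
# are kept indefinitely (today's Facebook default).
RETENTION_POLICIES = {
    "work-chat": timedelta(days=30),   # delete after a month
    "family":    timedelta(days=365),  # delete after a year
}

def purge_expired(messages, now=None):
    """Drop messages older than their conversation's retention period."""
    now = now or datetime.now(timezone.utc)
    kept = []
    for msg in messages:
        limit = RETENTION_POLICIES.get(msg["conversation"])
        if limit is None or now - msg["sent_at"] <= limit:
            kept.append(msg)
    return kept
```

The point of the sketch is how little machinery the user-facing feature needs: a single policy per conversation and a periodic purge job, rather than the tedious one-by-one deletion Messenger requires today.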

Changes in the News Feed aimed at getting rid of trolls and misinformation. Facebook should give us the option of switching the News Feed from the algorithm-driven display (the default) to a reverse-chronological mode, so that the newest information always appears at the top. Why? Refreshing the News Feed to see new information (pull-to-refresh) is one of the mechanisms used to addict people to Facebook, eerily similar to pulling the lever on a slot machine. Another change could be showing, under public posts, only comments from one’s friends – trolling would then have a far smaller reach. The reach of memes (which could easily be detected with a good Machine Learning model) should be algorithmically curtailed, and they should be ranked lower than real pictures (e.g. pictures of users’ families). Notifications should display only the most important information and should be customisable by the user. These little changes could improve Facebook’s usability, but they would also reduce engagement – that is, the addiction to scrolling and wasting time on Facebook – which would affect Facebook’s revenue.
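The difference between the two feed modes discussed above boils down to the sort key. A toy sketch, with made-up post fields and a deliberately crude engagement score (Facebook’s real ranking is, of course, far more complex and not public):

```python
def chronological_feed(posts):
    """Reverse-chronological: newest first, no engagement signals."""
    return sorted(posts, key=lambda p: p["created_at"], reverse=True)

def engagement_feed(posts):
    """Engagement-ranked (the default): most-interacted-with first.
    The weighting here is an arbitrary illustration."""
    return sorted(posts,
                  key=lambda p: p["likes"] + 2 * p["comments"],
                  reverse=True)
```

A chronological mode removes the slot-machine uncertainty of pull-to-refresh: the ordering is predictable, so there is nothing for the ranking algorithm to tease you with.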

The above are but a few ideas for how Facebook could improve. The critical factor that determines how Facebook looks, works, and feels, however, is that Facebook’s services need to evoke the highest possible engagement. The purpose of engagement is clear: to make users spend as much time as possible on Facebook – which does not mean it will be time well spent.


👎🏼 Google’s had a poor year

Google’s had a poor year everywhere except revenue, which keeps growing (more slowly, but still). Google shares the online ad market practically only with Facebook. In fact, it’s only thanks to Facebook that Google’s year doesn’t look like a PR disaster – for each of Google’s PR scandals, Facebook had five of their own. Google’s CEO, Sundar Pichai, can thank Zuckerberg for all his support.

Let’s recap the uncool things that happened at Google this year.

YouTube is an even more efficient manipulation machine than Facebook. YouTube’s algorithms will rapidly take us to the darkest realms of conspiracy theories and manipulations. Internet activists call YouTube the greatest radicalisation machine of the 21st century.

Project Maven, conducted jointly with the Pentagon to analyse data from military drones, did not sit well with Google’s employees – 3,000 of them signed a letter of protest.

Google was fined by the EU for monopolistic practices. The EU is very sensitive to any actions aimed at restricting competition, not only from the consumer’s point of view (Google’s products are usually free of charge, so consumers are not hurt financially) but also from the point of view of businesses. The EU does not like it when one company’s actions put the existence of other companies in peril. In 2018, Google was fined €5bn for automatically installing Chrome and Google Search on every Android phone. A year earlier, the European Commission had fined Google for anti-competitive practices used against price-comparison websites. In total, Google has already been fined more than €7.5bn – the highest penalties the EU has levied on any technology company. For comparison, Intel has been fined €1bn and Microsoft €2.1bn.

Google was working on a censored version of Google Search for the Chinese market. I covered why this is a problematic development in the previous issue of this newsletter.

Google has been turning a blind eye to cases of sexual harassment, which caused what might have been the biggest protest at a technology company: about 20,000 Google employees staged a walkout. The rally has surely shown big-tech employees that they can organise themselves, and that their actions can influence the decisions taken in their companies. We have witnessed the birth of something new – an early form of trade unions, which this industry has not seen before.

Last but not least, the core of all Google’s problems: the incessant collection of data on their users. Thanks to Android, Google has access to really detailed and sensitive user data. Google’s business model is based on exploiting these data to target advertisements effectively, and it is unlikely that Google will limit their data intake on their own. I’m curious to see how external regulators will limit it. To my mind, privacy will become a more and more important topic. In Europe, we have the GDPR, which was given a warm welcome by consumers but doesn’t go far enough in my opinion. In the US, there’s no equivalent of the GDPR, and, in effect, there are virtually no regulations regarding the collection and processing of user data. This will become an essential ingredient of political discussions in the coming years, and the issue will sooner or later be regulated for the sake of consumers’ privacy. It will be a welcome development, because we deserve privacy.
