# The Social Dilemma



## MrBlackhill (Jun 10, 2020)

That's a documentary on Netflix. It came out today (September 9th) and I've just watched it. It's fully in line with my point of view on social media, the internet, tech and AI. *A must-watch.* Every kid should see it at school, starting right now. My future kids will have to watch it 10 times before I let them use the internet for the first time.

The documentary is about more than just social media. It's about how we are manipulated by AI algorithms running across the internet and the gadgets we use. It's also about fake news, propaganda and conspiracy theories. Those algorithms learn from every single action of the billions of users, and they then influence us not for the good of society, but for the goal they were designed for, and that goal is all about money. The algorithms don't know whether their actions are good for the users; they have no clue about that concept. They just learn what works when asked to maximise profits, no matter what it takes.

Personally, I deleted my Facebook account nearly a decade ago. I could already see what a bad influence it was and how much time people wasted on it. But that was just the beginning... YouTube, Instagram, TikTok, Reddit, Snapchat, Pinterest and Twitter are all trying to gain more users.

But some of them got me. YouTube got me. Who isn't using YouTube nowadays? Surely you've noticed their "weird selection" of recommended videos.

Instagram also got me. Not as an active user, but as a passive user consuming content. At first, it was just to scan for travel destination ideas and appreciate the great pictures that could be found there. But then the algorithm pulled me into a diversified feed of all my interests. I'm currently reducing my time on that platform since we cannot travel anyway, but my feed is now about far more than travel pictures.

But back to YouTube... I can waste so much time on YouTube. I use it as a great source of information because, yes, there is some high-quality content buried in all that junk. And yet I waste much more of my time on all the other recommended videos that play on my interests and my high curiosity.

I've always seen the world as ironically "well balanced". By that, I mean that every great advance in the evolution of technology and humanity also has another side of the coin. I've always seen how cool access to information on the internet is, but also how dangerous.

I don't feel like I'm caught up in the rise of social media, but I'm definitely caught up in the rise of the internet and its quick access to information. In my case, I always fear missing out on information I'd like to be aware of as early as possible. I love learning and I'm very curious, so I read and read, and then I end up navigating the internet for hours while my entrepreneurial projects don't get done.


----------



## MrBlackhill (Jun 10, 2020)

I'll quote a few passages of the documentary, as I feel this is very important information. I've tried to put a title on each passage I'm quoting.

*The business model*


> For the first 50 years of Silicon Valley, the industry made products – hardware, software – and sold them to customers. For the last ten years, the biggest business has been selling their users. We don't pay for the products that we use; advertisers pay for the products that we use. Advertisers are the customers; we're the thing being sold. If you are not paying for the product, then you are the product. Their business model is to keep people engaged on the screen. Let's figure out how to get as much of this person's attention as we possibly can. How much time can we get you to spend? How much of your life can we get you to give us? Our attention is the product being sold to advertisers. It's the gradual, slight, imperceptible change in your own behavior and perception that is the product. Changing what you do, how you think, who you are.
> 
> This is what every business has always dreamt of. They sell certainty. In order to be successful in that business, you have to have great predictions. Great predictions begin with one imperative: you need a lot of data. Many people call this surveillance capitalism, capitalism profiting off of the infinite tracking of everywhere everyone goes. It’s a marketplace that trades exclusively in human futures and those markets have produced the trillions of dollars that have made the Internet companies the richest companies in the history of humanity.


*The data gathering*


> What I want people to know is that everything they're doing online is being watched, is being tracked, is being measured. Every single action you take is carefully monitored and recorded. Exactly what image you stop and look at, for how long you look at it. They know when people are lonely. They know when people are depressed. They know when people are looking at photos of your ex-romantic partners. They know what you're doing late at night. They know the entire thing. Whether you're an introvert or an extrovert, or what kind of neuroses you have, what your personality type is like. They have more information about us than has ever been imagined in human history. And so, all of this data is being fed into these systems that have almost no human supervision and that are making better and better and better and better predictions about what we're gonna do and who we are.


*The predictive model*


> People have the misconception it's our data being sold. It's not in the business interest to give up the data. They build models that predict our actions, and whoever has the best model wins. All of the things we've ever done, all the clicks we've ever made, all the videos we've watched, all the likes, that all gets brought back into building a more and more accurate model. The model, once you have it, you can predict the kinds of things that person does. Where you'll go. I can predict what kind of videos will keep you watching. I can predict what kinds of emotions tend to trigger you.


*The business goals*


> At a lot of technology companies, there's three main goals. There's the engagement goal: to drive up your usage, to keep you scrolling. There's the growth goal: to keep you coming back and inviting as many friends and getting them to invite more friends. And then there's the advertising goal: to make sure that, as all that's happening, we're making as much money as possible from advertising. Each of these goals are powered by algorithms whose job is to figure out what to show you to keep those numbers going up.


*The persuasion*


> We've created a world in which online connection has become primary, especially for younger generations. And yet, in that world, any time two people connect, the only way it's financed is through a sneaky third person who's paying to manipulate those two people. So, we've created an entire global generation of people who are raised within a context where the very meaning of communication, the very meaning of culture, is manipulation. We've put deceit and sneakiness at the absolute center of everything we do.
> 
> Magicians were almost like the first neuroscientists and psychologists. Like, they were the ones who first understood how people's minds work. They just, in real time, are testing lots and lots of stuff on people. A magician understands something, some part of your mind that we're not aware of. That's what makes the illusion work. Doctors, lawyers, people who know how to build 747s or nuclear missiles, they don't know more about how their own mind is vulnerable. That's a separate discipline. And it's a discipline that applies to all human beings. From that perspective, you can have a very different understanding of what technology is doing. When I was at the Stanford Persuasive Technology Lab, this is what we learned. How could you use everything we know about the psychology of what persuades people and build that into technology? Now, many of you in the audience are geniuses already. I think that's true, but my goal is to turn you into a behavior-change genius. There are many prominent Silicon Valley figures who went through that class and learned how to make technology more persuasive, Tristan being one.
> ...


*The experiments*


> Companies like Google and Facebook would roll out lots of little, tiny experiments that they were constantly doing on users. And over time, by running these constant experiments, you develop the most optimal way to get users to do what you want them to do. It's manipulation. We’re all lab rats.
> 
> Facebook conducted what they called "massive-scale contagion experiments." How do we use subliminal cues on the Facebook pages to get more people to go vote in the midterm elections? And they discovered that they were able to do that. One thing they concluded is that we now know we can affect real-world behavior and emotions without ever triggering the user's awareness. They are completely clueless. We're pointing these engines of AI back at ourselves to reverse-engineer what elicits responses from us. Almost like you're stimulating nerve cells on a spider to see what causes its legs to respond. So, it really is this kind of prison experiment where we're just, you know, roping people into the matrix, and we're just harvesting all this money and data from all their activity to profit from. And we're not even aware that it's happening. So, we want to psychologically figure out how to manipulate you as fast as possible and then give you back that dopamine hit. We did that brilliantly at Facebook. Instagram has done it. WhatsApp has done it. You know, Snapchat has done it. Twitter has done it. It's all of these people understood this consciously, and we did it anyway.
> 
> If something is not a tool, it's demanding things from you. It's seducing you. It's manipulating you. It wants things from you. And we've moved away from having a tools-based technology environment to an addiction- and manipulation-based technology environment. That's what's changed. Social media isn't a tool that's just waiting to be used. It has its own goals, and it has its own means of pursuing them by using your psychology against you.


*The drug*


> There are only two industries that call their customers “users”: illegal drugs and software.
> 
> So, here's the thing. Social media is a drug. I mean, we have a basic biological imperative to connect with other people. That directly affects the release of dopamine in the reward pathway. Millions of years of evolution are behind that system to get us to come together and live in communities, to find mates, to propagate our species. So, there's no doubt that a vehicle like social media, which optimizes this connection between people, is going to have the potential for addiction.
> 
> These technology products were not designed by child psychologists who are trying to protect and nurture children. They were just designing these algorithms that were really good at recommending the next video to you, or really good at getting you to take a photo with a filter on it. It's not just that it's controlling where they spend their attention. Especially social media starts to dig deeper and deeper down into the brain stem and take over kids' sense of self-worth and identity. We evolved to care about whether other people in our tribe think well of us or not because it matters. But were we evolved to be aware of what 10,000 people think of us? We were not evolved to have social approval being dosed to us every five minutes. That was not at all what we were built to experience. We curate our lives around this perceived sense of perfection because we get rewarded in these short-term signals - hearts, likes, thumbs-up - and we conflate that with value, and we conflate it with truth. And instead, what it really is is fake, brittle popularity... that's short-term and that leaves you even more, and admit it, vacant and empty than before you did it. Because then it forces you into this vicious cycle where you're like, "What's the next thing I need to do now? 'Cause I need it back." Think about that compounded by two billion people, and then think about how people react then to the perceptions of others. It's really bad.


*The depression*


> There has been a gigantic increase in depression and anxiety for American teenagers which began right around... between 2011 and 2013. The number of teenage girls out of 100,000 in this country who were admitted to a hospital every year because they cut themselves or otherwise harmed themselves, that number was pretty stable until around 2010, 2011, and then it begins going way up. It's up 62 percent for older teen girls. It's up 189 percent for the preteen girls. That's nearly triple. Even more horrifying, we see the same pattern with suicide. The older teen girls, 15 to 19 years old, they're up 70 percent, compared to the first decade of this century. The preteen girls, who have very low rates to begin with, they are up 151 percent. And that pattern points to social media. Gen Z, the kids born after 1996 or so, those kids are the first generation in history that got on social media in middle school. How do they spend their time? They come home from school, and they're on their devices. A whole generation is more anxious, more fragile, more depressed. They're much less comfortable taking risks. The number who have ever gone out on a date or had any kind of romantic interaction is dropping rapidly. This is a real change in a generation. And remember, for every one of these, for every hospital admission, there's a family that is traumatized and horrified. It's plain as day to me. These services are killing people... and causing people to kill themselves.


*The adaptation*


> There's this narrative that, you know, "We'll just adapt to it. We'll learn how to live with these devices, just like we've learned how to live with everything else." And what this misses is there's something distinctly new here. Perhaps the most dangerous piece of all this is the fact that it's driven by technology that's advancing exponentially. Roughly, if you say from, like, the 1960s to today, processing power has gone up about a trillion times. Nothing else that we have has improved at anything near that rate. Like, cars are, you know, roughly twice as fast. And almost everything else is negligible. And perhaps most importantly, our physiology, our brains have evolved not at all. Human beings, at a mind and body and sort of physical level, are not gonna fundamentally change. We can do genetic engineering and develop new kinds of human beings, but realistically speaking, you're living inside of hardware, a brain, that was, like, millions of years old, and then there's this screen, and then on the opposite side of the screen, there's these thousands of engineers and supercomputers that have goals that are different than your goals, and so, who's gonna win in that game? Who's gonna win?


*The machine*


> When you think of AI, you know, an AI's gonna ruin the world, and you see, like, a Terminator, and you see Arnold Schwarzenegger. You see drones, and you think, like, "Oh, we're gonna kill people with AI." And what people miss is that AI already runs today's world right now. Even talking about "an AI" is just a metaphor. At these companies like Google, there's just massive, massive rooms, some of them underground, some of them underwater, of just computers. Tons and tons of computers, as far as the eye can see. They're deeply interconnected with each other and running extremely complicated programs, sending information back and forth between each other all the time. And they'll be running many different programs, many different products on those same machines. Some of those things could be described as simple algorithms, some could be described as algorithms that are so complicated, you would call them intelligence.


*The algorithm*


> I like to say that algorithms are opinions embedded in code... and that algorithms are not objective. Algorithms are optimized to some definition of success. So, if you can imagine, if a commercial enterprise builds an algorithm to their definition of success, it's a commercial interest. It's usually profit. You are giving the computer the goal state, "I want this outcome," and then the computer itself is learning how to do it. That's where the term "machine learning" comes from. And so, every day, it gets slightly better at picking the right posts in the right order so that you spend longer and longer in that product. And no one really understands what they're doing in order to achieve that goal. The algorithm has a mind of its own, so even though a person writes it, it's written in a way that you kind of build the machine, and then the machine changes itself. There's only a handful of people at these companies, at Facebook and Twitter and other companies... There's only a few people who understand how those systems work, and even they don't necessarily fully understand what's gonna happen with a particular piece of content. So, as humans, we've almost lost control over these systems. Because they're controlling, you know, the information that we see, they're controlling us more than we're controlling them.
> 
> So, imagine you're on Facebook... and you're effectively playing against this artificial intelligence that knows everything about you, can anticipate your next move, and you know literally nothing about it, except that there are cat videos and birthdays on it. That's not a fair fight.
> 
> We were all looking for the moment when technology would overwhelm human strengths and intelligence. When is it gonna cross the singularity, replace our jobs, be smarter than humans? But there's this much earlier moment... when technology exceeds and overwhelms human weaknesses. This point being crossed is at the root of addiction, polarization, radicalization, outrage-ification, vanity-ification, the entire thing. This is overpowering human nature, and this is checkmate on humanity.


*The manipulation of information*


> One of the ways I try to get people to understand just how wrong feeds from places like Facebook are is to think about the Wikipedia. When you go to a page, you're seeing the same thing as other people. So, it's one of the few things online that we at least hold in common. Now, just imagine for a second that Wikipedia said, "We're gonna give each person a different customized definition, and we're gonna be paid by people for that." So, Wikipedia would be spying on you. Wikipedia would calculate, "What's the thing I can do to get this person to change a little bit on behalf of some commercial interest?" Right? And then it would change the entry. Can you imagine that? Well, you should be able to, because that's exactly what's happening on Facebook. It's exactly what's happening in your YouTube feed. When you go to Google and type in "Climate change is," you're going to see different results depending on where you live. In certain cities, you're gonna see it autocomplete with "climate change is a hoax." In other cases, you're gonna see "climate change is causing the destruction of nature." And that's a function not of what the truth is about climate change, but about where you happen to be Googling from and the particular things Google knows about your interests. Even two friends who are so close to each other, who have almost the exact same set of friends, they think, you know, "I'm going to news feeds on Facebook. I'll see the exact same set of updates." But it's not like that at all. They see completely different worlds because they're based on these computers calculating what's perfect for each of them.


*The manipulation of reality*


> The way to think about it is it's 2.7 billion Truman Shows. Each person has their own reality, with their own... facts. Why do you think that Truman has never come close to discovering the true nature of his world until now? We accept the reality of the world with which we're presented. It's as simple as that. Over time, you have the false sense that everyone agrees with you, because everyone in your news feed sounds just like you. And that once you're in that state, it turns out you're easily manipulated, the same way you would be manipulated by a magician. A magician shows you a card trick and says, "Pick a card, any card." What you don't realize was that they've done a set-up, so you pick the card they want you to pick. And that's how Facebook works. Facebook sits there and says, "Hey, you pick your friends. You pick the links that you follow." But that's all nonsense. It's just like the magician. Facebook is in charge of your news feed. We all simply are operating on a different set of facts. When that happens at scale, you're no longer able to reckon with or even consume information that contradicts with that world view that you've created. That means we aren't actually being objective, constructive individuals. And then you look over at the other side, and you start to think, "How can those people be so stupid? Look at all of this information that I'm constantly seeing. How are they not seeing that same information?" And the answer is, "They're not seeing that same information."


*The polarization*


> At YouTube, I was working on YouTube recommendations. It worries me that an algorithm that I worked on is actually increasing polarization in society. But from the point of view of watch time, this polarization is extremely efficient at keeping people online. People think the algorithm is designed to give them what they really want, only it's not. The algorithm is actually trying to find a few rabbit holes that are very powerful, trying to find which rabbit hole is the closest to your interest. And then if you start watching one of those videos, then it will recommend it over and over again. It's not like anybody wants this to happen. It's just that this is what the recommendation system is doing.
> 
> On November 7th, the hashtag "Pizzagate" was born. I still am not 100 percent sure how this originally came about, but the idea that ordering a pizza meant ordering a trafficked person. As the groups got bigger on Facebook, Facebook's recommendation engine started suggesting to regular users that they join Pizzagate groups. So, if a user was, for example, anti-vaccine or believed in chemtrails or had indicated to Facebook's algorithms in some way that they were prone to belief in conspiracy theories, Facebook's recommendation engine would serve them Pizzagate groups. Eventually, this culminated in a man showing up with a gun, deciding that he was gonna go liberate the children from the basement of the pizza place that did not have a basement. This is an example of a conspiracy theory that was propagated across all social networks. The social network's own recommendation engine is voluntarily serving this up to people who had never searched for the term "Pizzagate" in their life.


*The fake news propagation*


> There's a study, an MIT study, that fake news on Twitter spreads six times faster than true news. What is that world gonna look like when one has a six-times advantage to the other one? We've created a system that biases towards false information. Not because we want to, but because false information makes the companies more money than the truth. The truth is boring. It's a disinformation-for-profit business model. You make money the more you allow unregulated messages to reach anyone for the best price. Facebook has trillions of these news feed posts. They can't know what's real or what's true... which is why this conversation is so critical right now. People have no idea what's true, and now it's a matter of life and death.


*The weaponization*


> One of the problems with Facebook is that, as a tool of persuasion, it may be the greatest thing ever created. Now, imagine what that means in the hands of a dictator or an authoritarian. If you want to control the population of your country, there has never been a tool as effective as Facebook. Some of the most troubling implications of governments and other bad actors weaponizing social media is that it has led to real, offline harm. I think the most prominent example that's gotten a lot of press is what's happened in Myanmar. In Myanmar, when people think of the Internet, what they are thinking about is Facebook. And what often happens is when people buy their cell phone, the cell phone shop owner will actually preload Facebook on there for them and open an account for them. And so when people get their phone, the first thing they open and the only thing they know how to open is Facebook. Well, a new bombshell investigation exposes Facebook's growing struggle to tackle hate speech in Myanmar. Facebook really gave the military and other bad actors a new way to manipulate public opinion and to help incite violence against the Rohingya Muslims that included mass killings, burning of entire villages, mass rape, and other serious crimes against humanity that have now led to 700,000 Rohingya Muslims having to flee the country. It's not that highly motivated propagandists haven't existed before. It's that the platforms make it possible to spread manipulative narratives with phenomenal ease, and without very much money. If I want to manipulate an election, I can now go into a conspiracy theory group on Facebook, and I can find 100 people who believe that the Earth is completely flat and think it's all this conspiracy theory that we landed on the moon, and I can tell Facebook, "Give me 1,000 users who look like that." Facebook will happily send me thousands of users that look like them that I can now hit with more conspiracy theories.
> 
> Algorithms and manipulative politicians are becoming so expert at learning how to trigger us, getting so good at creating fake news that we absorb as if it were reality, and confusing us into believing those lies. It's as though we have less and less control over who we are and what we believe. We in the tech industry have created the tools to destabilize and erode the fabric of society in every country, all at once, everywhere. You have this in Germany, Spain, France, Brazil, Australia. Some of the most "developed nations" in the world are now imploding on each other, and what do they have in common?


*The manipulation of democracy*


> The manipulation by third parties is not a hack. The Russians didn't hack Facebook. What they did was they used the tools that Facebook created for legitimate advertisers and legitimate users, and they applied it to a nefarious purpose. It's like remote-control warfare. One country can manipulate another one without actually invading its physical borders. Do we want this system for sale to the highest bidder? For democracy to be completely for sale, where you can reach any mind you want, target a lie to that specific population, and create culture wars? Do we want that?
> 
> If everyone's entitled to their own facts, there's really no need for compromise, no need for people to come together. In fact, there's really no need for people to interact. We need to have... some shared understanding of reality. Otherwise, we aren't a country.


*The illusion of control over the technology*


> We are allowing the technologists to frame this as a problem that they're equipped to solve. That is... That's a lie. People talk about AI as if it will know truth. AI's not gonna solve these problems. AI cannot solve the problem of fake news. Google doesn't have the option of saying, "Oh, is this conspiracy? Is this truth?" Because they don't know what truth is. If we don't agree on what is true or that there is such a thing as truth, we're toast. This is the problem beneath other problems because if we can't agree on what's true, then we can't navigate out of any of our problems.


*The programmable human*


> A lot of people in Silicon Valley subscribe to some kind of theory that we're building some global super brain, and all of our users are just interchangeable little neurons, no one of which is important. And it subjugates people into this weird role where you're just, like, this little computing element that we're programming through our behavior manipulation for the service of this giant brain, and you don't matter. You're not gonna get paid. You're not gonna get acknowledged. You don't have self-determination. We'll sneakily just manipulate you because you're a computing node, so we need to program you because that's what you do with computing nodes.


*The existential threat*


> When you think about technology and it being an existential threat, you know, that's a big claim, and... it's easy to then, in your mind, think, "Okay, so, there I am with the phone... scrolling, clicking, using it. Like, where's the existential threat? Okay, there's the supercomputer. The other side of the screen, pointed at my brain, got me to watch one more video. Where's the existential threat?" It's not about the technology being the existential threat. It's the technology's ability to bring out the worst in society and the worst in society being the existential threat. If technology creates mass chaos, outrage, incivility, lack of trust in each other, loneliness, alienation, more polarization, more election hacking, more populism, more distraction and inability to focus on the real issues... that's just society. And now society is incapable of healing itself and just devolving into a kind of chaos. This affects everyone, even if you don't use these products. These things have become digital Frankensteins that are terraforming the world in their image, whether it's the mental health of children or our politics and our political discourse, without taking responsibility for taking over the public square. I think we have to have the platforms be responsible for when they take over election advertising, they're responsible for protecting elections. When they take over mental health of kids or Saturday morning, they're responsible for protecting Saturday morning. The race to keep people's attention isn't going away. Our technology's gonna become more integrated into our lives, not less. The AIs are gonna get better at predicting what keeps us on the screen, not worse at predicting what keeps us on the screen.
> 
> Is this the last generation of people that are gonna know what it was like before this illusion took place? Like, how do you wake up from the matrix when you don't know you're in the matrix?


*The regulation of the business model*


> What I see is a bunch of people who are trapped by a business model, an economic incentive, and shareholder pressure that makes it almost impossible to do something else. I think we need to accept that it's okay for companies to be focused on making money. What's not okay is when there's no regulation, no rules, and no competition, and the companies are acting as sort of de facto governments. And then they're saying, "Well, we can regulate ourselves." I mean, that's just a lie. That's just ridiculous. Financial incentives kind of run the world, so any solution to this problem has to realign the financial incentives. There's no fiscal reason for these companies to change. And that is why I think we need regulation. The phone company has tons of sensitive data about you, and we have a lot of laws that make sure they don't do the wrong things. We have almost no laws around digital privacy, for example. We could tax data collection and processing the same way that you, for example, pay your water bill by monitoring the amount of water that you use. You tax these companies on the data assets that they have. It gives them a fiscal reason to not acquire every piece of data on the planet. The law runs way behind on these things, but what I know is the current situation exists not for the protection of users, but for the protection of the rights and privileges of these gigantic, incredibly wealthy companies. Are we always gonna defer to the richest, most powerful people? Or are we ever gonna say, "You know, there are times when there is a national interest. There are times when the interests of people, of users, is actually more important than the profits of somebody who's already a billionaire"? These markets undermine democracy, and they undermine freedom, and they should be outlawed. This is not a radical proposal. There are other markets that we outlaw. We outlaw markets in human organs. We outlaw markets in human slaves. 
> Because they have inevitable destructive consequences. We live in a world in which a tree is worth more, financially, dead than alive, in a world in which a whale is worth more dead than alive. For so long as our economy works in that way and corporations go unregulated, they're going to continue to destroy trees, to kill whales, to mine the earth, and to continue to pull oil out of the ground, even though we know it is destroying the planet and we know that it's going to leave a worse world for future generations. This is short-term thinking based on this religion of profit at all costs, as if somehow, magically, each corporation acting in its selfish interest is going to produce the best result. This has been affecting the environment for a long time. What's frightening, and what hopefully is the last straw that will make us wake up as a civilization to how flawed this theory has been in the first place is to see that now we're the tree, we're the whale. Our attention can be mined. We are more profitable to a corporation if we're spending time staring at a screen, staring at an ad, than if we're spending that time living our life in a rich way. And so, we're seeing the results of that. We're seeing corporations using powerful artificial intelligence to outsmart us and figure out how to pull our attention toward the things they want us to look at, rather than the things that are most consistent with our goals and our values and our lives.


*People interviewed*

Tristan Harris
Google – Former Design Ethicist
Center for Humane Technology – Co-Founder



Tim Kendall
Facebook – Former Executive (Director of monetization)
Pinterest – Former President
Moment – CEO



Jaron Lanier
Founding Father of Virtual Reality
Computer Scientist



Roger McNamee
Facebook – Early Investor
Venture Capitalist
Investor in technology for 35 years



Aza Raskin
Firefox & Mozilla Labs – Former Employee
Center for Humane Technology – Co-Founder
Inventor – Infinite Scroll



Justin Rosenstein
Facebook – Former Engineer
Google – Former Engineer
Asana – Co-Founder
Co-Inventor – Facebook “Like” Button



Shoshana Zuboff, PhD
Harvard Business School – Professor Emeritus
Author – The Age of Surveillance Capitalism



Sandy Parakilas
Facebook – Former Operations Manager
Uber – Former Product Manager



Chamath Palihapitiya
Facebook – Former VP of Growth



Sean Parker
Facebook – Former President



Dr. Anna Lembke
Stanford University – School of Medicine
Medical Director of Addiction Medicine



Jonathan Haidt, PhD
NYU Stern School of Business – Social Psychologist
Author – The Righteous Mind: Why Good People Are Divided by Politics and Religion



Randima Fernando
NVIDIA – Former Product Manager
Mindful Schools – Former Executive Director
Center for Humane Technology – Co-Founder



Cathy O’Neil, PhD
Data Scientist
Author – Weapons of Math Destruction



Jeff Seibert
Twitter – Former Executive
Serial Tech Entrepreneur



Bailey Richardson
Instagram – Early Team



Rashida Richardson
NYU School of Law – Adjunct Professor
A.I. Now Institute – Director of Policy Research



Guillaume Chaslot
YouTube – Former Engineer
IntuitiveAI – CEO
AlgoTransparency – Founder



Renée Diresta
Stanford Internet Observatory – Research Manager
Data for Democracy – Former Head of Policy



Cynthia M. Wong
Human Rights Watch – Former Senior Internet Researcher



Joe Toscano
Google – Former Experience Design Consultant
Author – Automating Humanity



Alex Roetter
Twitter – Former Senior VP of Engineering


----------



## sags (May 15, 2010)

That genie is well out of the bottle now and I gave up worrying about it. I talk to Alexa (Amazon Echo) like she is a family member now.


----------



## james4beach (Nov 15, 2012)

What bothers me is how kids and teenagers (and adults too) make their lives revolve around corporate-owned platforms. People have become real suckers.

I was a young nerd when the internet was emerging. Friends and I used to exchange web sites, talk on decentralized discussion groups (USENET and IRC). It was a free-for-all but what was great is that no single entity, no corporation or government, controlled it. In fact even when a company disliked what was happening on, say, IRC, there was no way for them to stop it. It was self regulated among the community of people who ran the infrastructure.

It was also difficult for the government to monitor or control that environment. This was community-owned and operated infrastructure. Our discussions and social exchanges took place in a setting that did not have a corporate agenda, nor any marketing.

Fast forward to now. It seems that all of society is obsessed with Twitter, Facebook, Instagram. They are addicted to corporate portals and highly dependent on them. They do their "socializing" within a highly curated, corporate environment that is heavily branded, marketed, etc. Advertising is plastered everywhere, even embedded into the social hangout. The government has tightly integrated with the system due to the central points of contact.

It all makes my skin crawl, it's like something out of a dystopian novel of the future where mega-corporations and governments have inserted themselves into all human socializing. In fact that's exactly what we have.

In fact, governments (intelligence services) are watching just about everything that young people write, and classifying them into databases. So are the large corporations. By the time these people reach adulthood, there will be very accurate data on their political ideas, leanings, buying habits, propensity for criminality, propensity for mental illness, propensity to challenge the government.

Creepy and disgusting!


----------



## james4beach (Nov 15, 2012)

MrBlackhill said:


> That's a documentary on Netflix. It just got out today (September 9th) and I've watched it. It's fully in line with my point of view about social media, internet, tech, AI. *A must-watch.* Every kid should see this at school starting right now. My future kids will have to watch this 10 times before I allow them to use internet for the first time.


This is worth watching. Here's an introduction to it and a good discussion:
"The Social Dilemma:" Lies Spread 6x Faster Than Truth on Social Media


----------



## pwm (Jan 19, 2012)

I watched it yesterday. It's something I was already aware of, but very disturbing nonetheless. I personally have no accounts at Facebook, Twitter or Instagram and never had any desire to create any, even though I'm an early adopter of computer technology. I was on online forums using dial-up modems long before the internet was widely available. YouTube is a valuable source of information for my photography hobby and I enjoy that, but I seek out my own content rather than accept what YouTube suggests.

At my age of 70+, social media platforms are not affecting me personally to a great extent; however, I am deeply troubled when I see my teenage grandsons on their smartphones all day long. How is this affecting their ability to distinguish fantasy from reality and to understand what the truth even means? My daughter and her husband seem unconcerned, and she is a very intelligent person with a college degree. I must say I am worried for their future.


----------



## cainvest (May 1, 2013)

james4beach said:


> Fast forward to now. It seems that all of society is obsessed with Twitter, Facebook, Instagram. They are addicted to corporate portals and highly dependent on them. They do their "socializing" within a highly curated, corporate environment that is heavily branded, marketed, etc. Advertising is plastered everywhere, even embedded into the social hangout. The government has tightly integrated with the system due to the central points of contact.


I don't see these new platforms as a major problem; advertising has been in many media outlets for a long time (TV, radio, etc.). Sure, they can target the ads better based on your activity, but is that really a bad thing? Of course, as with anything, becoming truly obsessed with it is bad.

As long as people apply good critical thinking, it'll be OK. Those easily swayed in opinion will continue to be, regardless of the platform: TV, word of mouth or social media.


----------



## james4beach (Nov 15, 2012)

pwm said:


> At my age of 70+, social media platforms are not affecting me personally to a great extent; however, I am deeply troubled when I see my teenage grandsons on their smartphones all day long. How is this affecting their ability to distinguish fantasy from reality and to understand what the truth even means? My daughter and her husband seem unconcerned, and she is a very intelligent person with a college degree. I must say I am worried for their future.


I'm in my 30s and have been working in computer security, including with government agencies. We have looked at these problems as part of our work.

I can tell you that among my colleagues (computer security experts) and especially our government counterparts, most people avoid social media as much as possible and most try to minimize use of their smartphones. The documentary points out that the same is true in Silicon Valley. *People who work closely with these technologies know that they, and their children, should avoid them.*

You should especially avoid using them on your smartphone, since on that platform they collect far too much information about you. The safest place to use social media is on a desktop or laptop computer, something that's stationary and that you only occasionally log into.

Contrary to popular wisdom, much of the information being collected about you on the smart phone and apps (Facebook, Twitter, etc) CAN be used to harm you. Nobody seems to care about privacy any more, but this is why privacy is important. The data collected on you can be used to harm you.

And what I think people are not grasping is that this data collection lasts a lifetime. So the harm does not have to happen tomorrow or next year. Data is collected on you for decades. Eventually, if the data is stolen or used by a malicious party, they know your life history. And based on computer security history you can be virtually positive that someone WILL steal your data at some point.


----------



## cainvest (May 1, 2013)

james4beach said:


> Contrary to popular wisdom, much of the information being collected about you on the smart phone and apps (Facebook, Twitter, etc) CAN be used to harm you. Nobody seems to care about privacy any more, but this is why privacy is important. The data collected on you can be used to harm you.


I hear this repeated often, but is it really true? Sure, for people who really don't have a clue, like posting illegal stuff or saying they are going on vacation when their address is known, it can be used to harm them.


----------



## m3s (Apr 3, 2010)

james4beach said:


> I was a young nerd when the internet was emerging. Friends and I used to exchange web sites, talk on decentralized discussion groups (USENET and IRC). It was a free-for-all but what was great is that no single entity, no corporation or government, controlled it. In fact even when a company disliked what was happening on, say, IRC, there was no way for them to stop it. It was self regulated among the community of people who ran the infrastructure.
> 
> It was also difficult for the government to monitor or control that environment. This was community-owned and operated infrastructure. Our discussions and social exchanges took place in a setting that did not have a corporate agenda, nor any marketing.
> 
> Fast forward to now. It seems that all of society is obsessed with Twitter, Facebook, Instagram. They are addicted to corporate portals and highly dependent on them. They do their "socializing" within a highly curated, corporate environment that is heavily branded, marketed, etc. Advertising is plastered everywhere, even embedded into the social hangout. The government has tightly integrated with the system due to the central points of contact.


@james4beach You should really look into DApps. Decentralized applications are being developed now for the very reason you describe.

It's basically at the dial-up USENET stage, where only nerds use it now, but there is a lot of development and testing going on, especially with Eth2.

One of the first DApps on Eth2 will likely be a new method of online socializing without the centralized manipulation.


----------



## james4beach (Nov 15, 2012)

m3s said:


> @james4beach You should really look into DApps. Decentralized applications are being developed now for the very reason you describe.
> 
> It's basically at the dial-up USENET stage, where only nerds use it now, but there is a lot of development and testing going on, especially with Eth2.
> 
> One of the first DApps on Eth2 will likely be a new method of online socializing without the centralized manipulation.


Fascinating, thanks.


----------



## MrBlackhill (Jun 10, 2020)

cainvest said:


> I don't see these new platforms as a major problem; advertising has been in many media outlets for a long time (TV, radio, etc.). Sure, they can target the ads better based on your activity, but is that really a bad thing? Of course, as with anything, becoming truly obsessed with it is bad.
> 
> As long as people can do good critical thinking it'll be ok. Those easily swayed in opinion will continue to be regardless of the platform, TV, word of mouth or social media.


They've talked about that argument in the documentary.

When I was a kid watching TV and listening to the radio, I had access to only 4 TV channels, and each channel showed the exact same thing to whoever was watching it. Everybody in the same province watching TV would see the same things. Some ads would differ from city to city, but they were pretty generic because the TV didn't know who you were. It wasn't gathering information about you. TV was just an entertainment tool. Everybody saw the same information on the news and watched the same series. All that content was regulated and published by big entities. I would never see conspiracy theories on TV. I would never see a news channel broadcasting fake news.

With the internet, we now have access to billions and billions of pieces of information on demand. That's pretty cool. With YouTube, we now have access to content created all around the world. All that for free. But on social media platforms like YouTube, there are millions of individuals creating unregulated content available to the whole world, and each of those individuals can reach millions of people. Add to that the algorithms watching every single thing you do on the internet to build up your own reality of suggestions and ads based on your interests and profile. Our cellphones are always with us, right there in our pockets, waiting to show us some unregulated content specifically chosen for us by an AI algorithm with access to everything we've done using that cellphone, every search query, etc. It's not just about the advertising.

Good critical thinking is not enough when your entire life is built on a reality constructed around reinforcing your perceptions. What if I told you that aliens live among us? Can I prove it to you? No. Can you prove that I'm wrong? No. Most people won't believe that aliens live among us, but does that make me a fool for thinking otherwise? Social media can shape your beliefs by suggesting content adapted to your personality. Now, instead of that example with aliens, do the same with the polarisation of the debates about politics, environmental change, etc.

The major problem is how good these devices are at influencing us, using advanced AI technology that adapts and learns much faster than humans can evolve to resist it.


----------



## james4beach (Nov 15, 2012)

cainvest said:


> I hear this repeated often, but is it really true? Sure, for people who really don't have a clue, like posting illegal stuff or saying they are going on vacation when their address is known, it can be used to harm them.


I really think the data can be used to harm you, but you have to "think out of the box"; there are so many examples of the harm it can cause.

Your political and ideological leanings are valuable info. They can identify groups you are a member of, and we never know which direction society will go in the future. You could find yourself in an unpopular, perhaps persecuted group. Maybe even your past history (your ideology when you were young) becomes the problem.

A great example would be a teen or young adult today who feels like a rebel of some kind. Perhaps he's into guns and opposed to lefties, or he's an anarchist and opposed to police. There are TONS of people like this in society, a large % of the youth population. So now you're building up a digital profile with this background. This will follow you around for life and you can't get rid of it.

Eventually, employment screening and other background-checking systems will hook into digital profiles. It's just another place the data is sold -- and it's already being used like that. So the rebellious habits of the young man can result in him having *problems finding a job throughout his life*.

That happens because the data and profiles linger. I know many guys who were somewhat rebellious when they were young, but cooled off with age. The same is true for MANY people on this board. You broke laws when you were young and did many questionable things. But it gets forgotten because ... there was no digital trace of it. _You've got a clean slate._

Today's system is not so forgiving. Whatever you do as a youth is not forgotten. You are analyzed, the profiles are created and will follow you for life.


----------



## james4beach (Nov 15, 2012)

I can put it another way.

There are posters on this board who, based on the kinds of things they are saying (their values) are having some potentially very damaging profiles built up on them, assuming they use a smart phone + social media. Especially Facebook, since it knows your real name. If they are 60 or 70 years old, not an issue. But if they are younger, they are creating a digital footprint which will make it harder for them to find a job in the future.

Many of you reading this probably have children on Facebook. Are any of them on FB groups where they post highly controversial material? Talk like they do among buddies at the bar (jokes about women, rape, guns, violence, overthrowing government)? Those groups exist, I see them all the time.

You'll notice that Facebook tries to enforce having your real name and rejects fake names. This is because part of their business model is collecting data on real people, and selling it. Potentially very harmful to any of us. Social media systems are also able to map out your associations and figure out your true identity, even if you give a fake name or alias.
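To make the association-mapping point concrete, here is a deliberately tiny sketch (not any platform's real algorithm; all names and data are invented) of how a pseudonymous account can be linked to a real identity purely from overlapping contact lists, using simple set overlap:

```python
# Toy illustration of identity inference from social connections.
# All accounts and contact lists below are hypothetical.

def jaccard(a: set, b: set) -> float:
    """Overlap between two contact sets: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Contact lists of known, real-name accounts (made-up data).
known_accounts = {
    "Alice Smith": {"bob", "carol", "dave", "erin"},
    "Frank Jones": {"mallory", "oscar", "peggy"},
}

# Contact list of a pseudonymous account we want to identify.
alias_contacts = {"bob", "carol", "erin", "trent"}

# Rank real identities by how similar their social graph is to the alias.
scores = {
    name: jaccard(contacts, alias_contacts)
    for name, contacts in known_accounts.items()
}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))  # "Alice Smith" scores 0.6
```

Real systems work at vastly larger scale and with many more signals (device IDs, location, timing), but the core idea is the same: your connections are a fingerprint, so a fake name alone does not make you anonymous.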


----------



## m3s (Apr 3, 2010)

james4beach said:


> You'll notice that Facebook tries to enforce having your real name and rejects fake names. This is because part of their business model is collecting data on real people, and selling it. Potentially very harmful to any of us. Social media systems are also able to map out your associations and figure out your true identity, even if you give a fake name or alias.


Most of my colleagues and I have changed our names on FB just so it's harder to look us up. I was on Instagram before FB and used it anonymously, without connecting it to FB or anything else. One day FB friends started to add me or be suggested to me, probably because they had connected their FB. I have since made a second Instagram account using a different email. When I was interviewed for a security clearance, they had a pretty detailed online profile on me.


----------



## james4beach (Nov 15, 2012)

m3s said:


> Most of my colleagues and I have changed our names on FB just so it's harder to look us up. I was on Instagram before FB and used it anonymously, without connecting it to FB or anything else. One day FB friends started to add me or be suggested to me, probably because they had connected their FB. I have since made a second Instagram account using a different email. When I was interviewed for a security clearance, they had a pretty detailed online profile on me.


Yup. And the detailed online profiles from security clearances and government checks show that they can see past these privacy countermeasures. That software isn't unique to government. It's all third-party software, created in places like Virginia and Israel and available to other industries as well.

When you're using social media, it all gets attached to you. I can't believe that some people think "this is of no consequence" to them in the long term.


----------



## like_to_retire (Oct 9, 2016)

james4beach said:


> There are posters on this board who, based on the kinds of things they are saying (their values) are having some potentially very damaging profiles built up on them, assuming they use a smart phone + social media. Especially Facebook, since it knows your real name. If they are 60 or 70 years old, not an issue. But if they are younger, they are creating a digital footprint which will make it harder for them to find a job in the future.


Interesting. Like many other boomers, I have not been lured into all the social media sites. Although boomers well understand the allure, we choose not to participate, since we see it for what it really is. Young people see this as a lack of understanding..... duh

I don't have accounts at Facebook, Twitter, YouTube, WhatsApp, WeChat, Instagram, QQ, TikTok, Snapchat or LinkedIn. I guess that covers most of the big names?

ltr


----------



## james4beach (Nov 15, 2012)

like_to_retire said:


> I don't have accounts at Facebook, Twitter, YouTube, WhatsApp, WeChat, Instagram, QQ, TikTok, Snapchat or LinkedIn. I guess that covers most of the big names?


Yeah I think that covers it. Good list!

I watch YouTube but never log in. It's a huge mistake to log in! Better to use these things anonymously.

WhatsApp should be OK though, I think, since you're talking directly with other people.


----------



## cainvest (May 1, 2013)

james4beach said:


> I really think the data can be used to harm you, but you have to "think out of the box"; there are so many examples of the harm it can cause.
> 
> Your political and ideological leanings are valuable info. They can identify groups you are a member of, and we never know which direction society will go in the future. You could find yourself in an unpopular, perhaps persecuted group. Maybe even your past history (your ideology when you were young) becomes the problem.
> 
> ...


I think you're grasping a bit here, but a tiny point taken. About 99% of people I know on social media (myself included) post nothing that will haunt us in the future, well, unless discussions and pictures about food and recipes, or regular activities like walking, hiking, bike riding, etc. become something bad to be used against you. Yes, some "clueless people" will post questionable content, especially to gain likes and subscribers, but those are few and far between.

The vast majority of companies would have no access to online data (unless you gave it to them), and even requesting it (or gathering it themselves) might be considered illegal for them to use. Sure, some very high-profile or high-level security jobs might request access to your online accounts, but that's unlikely for 99.99% of us.


----------



## cainvest (May 1, 2013)

james4beach said:


> You'll notice that Facebook tries to enforce having your real name and rejects fake names. This is because part of their business model is collecting data on real people, and selling it. Potentially very harmful to any of us. Social media systems are also able to map out your associations and figure out your true identity, even if you give a fake name or alias.


Really? I have different names on all my social media accounts, never the same and never real. My FB account is also not under my real name, and I've never had a problem with that. Where did you come up with the idea that FB rejects fake names?


----------



## james4beach (Nov 15, 2012)

cainvest said:


> The vast majority of companies would have no access to online data (unless you gave it to them) and even requesting it (or gathering it themselves) might be considered illegal for them to use. Sure, some very high profile or high level security jobs might request access to your online accounts but that's unlikely for 99.99% of us.


Even retailers buy this kind of data. There are different levels of access to the data, but the entire business model is about selling it.


----------



## cainvest (May 1, 2013)

james4beach said:


> Even retailers buy this kind of data. There are different levels of access to the data but the entire business model is about selling the data


Do you have examples of this data being sold that would be *associated directly* *with my real name*?


----------



## james4beach (Nov 15, 2012)

cainvest said:


> Do you have examples of this data being sold that would be *associated directly* *with my real name*?


Yes. Just because someone buys anonymous data from one place doesn't mean they can't figure out the real identities of the people. This is a famous piece of work on this topic:





Anonymity and the Netflix Dataset - Schneier on Security (www.schneier.com)





And that was 13 years ago! Even back then, anonymous viewing data from Netflix could be linked up with real people's names. The technology has advanced a lot since then.

I think we have to consider everything we do online to be traceable to our real identities. Even if you use a fake Facebook name or a Twitter pseudonym, through correlation research a lot of it can likely be traced back to you. It's not just a capability of government spy agencies, though they were the ones who first jumped on it.

Part of the problem here -- what worries me -- is that these new data-based techniques are so advanced that it's hard to envisage how they might be applied in the future.
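The linkage trick behind that Netflix study can be sketched in a few lines. This is a deliberately toy version (all users, names and ratings below are invented): match each "anonymized" ratings profile against public, named reviews by counting overlapping (movie, rating) pairs.

```python
# Toy linkage attack in the spirit of the Netflix/IMDb research:
# an "anonymized" ratings table is matched against public, named
# reviews posted elsewhere. All data here is invented.

anonymized = {  # user_id -> set of (movie, stars), names stripped
    "user_0412": {("Heat", 5), ("Alien", 4), ("Up", 3), ("Jaws", 5)},
    "user_0977": {("Cars", 2), ("Brave", 4), ("Ran", 5)},
}

public_reviews = {  # real name -> reviews posted publicly elsewhere
    "J. Doe": {("Heat", 5), ("Alien", 4), ("Jaws", 5)},
    "A. Roe": {("Cars", 2), ("Dune", 5)},
}

# For each anonymized user, find the public reviewer with the most
# overlapping ratings; a handful of matches is often enough to link them.
matches = {}
for uid, ratings in anonymized.items():
    name, overlap = max(
        ((n, len(ratings & r)) for n, r in public_reviews.items()),
        key=lambda t: t[1],
    )
    if overlap >= 3:  # arbitrary threshold for this toy example
        matches[uid] = (name, overlap)
        print(f"{uid} is probably {name} ({overlap} matching ratings)")
```

The real study used far richer matching (approximate dates, rare movies weighted more heavily), but the principle is the same: a few unusual data points shared between an "anonymous" dataset and a public one are enough to re-identify someone.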


----------



## cainvest (May 1, 2013)

james4beach said:


> Yes. Just because someone buys anonymous data from one place, doesn't mean they can't figure out the real identities of the people. This is a famous piece of work on this topic:
> 
> 
> 
> ...


But those people used their real names, see the bolded part below!

_They did not reverse the anonymity of the entire Netflix dataset. What they did was reverse the anonymity of the Netflix dataset for those sampled users who also entered some movie rankings, *under their own names*, in the IMDb._

What is so shocking about that?



james4beach said:


> And that was 13 years ago! Even back then, anonymous viewing data from Netflix could be linked up with real people's names. The technology has advanced a lot since then.
> 
> I think we have to consider everything we do online to be traceable to our real identities. Even if you have a fake Facebook name or Twitter pseudonym, through correlation research a lot of it can likely be traced back to you. It's not just a capability of government spy agencies, though they were the ones who first jumped on it.
> 
> Part of the problem here -- what worries me -- is that these new data based techniques are so advanced and it's hard to envisage how they might apply this in the future.


I'm sure if someone did a deep digital dive into my online presence they could figure out who I am, but what's that going to get them?


----------



## james4beach (Nov 15, 2012)

cainvest said:


> I'm sure if someone did a deep digital dive into my online presence they could figure out who I am but what's that going to get them?


They might have all kinds of motivations you're not thinking of.

There's a member on this board who, based on his profile, works for an employment verification service. Maybe he's collecting data, correlating it to people, and selling it. I don't like the idea of our casual conversations here affecting job prospects, but it may be happening.

You're thinking of "a deep digital dive" as manual effort, but everything is automated. A few years ago, I met a _very_ creepy woman who had some software that she used to track and correlate people to online identities. She showed me a system where she could input an online pseudonym and it produced a list of possible identities for the person.

It seems she had been tracking me and had figured out one of my online pseudonym associations. I immediately killed that online profile and stopped using it, but it shows what can happen. And I was being pretty careful at the time. It's not like I was going around saying who I am in real life.

She was a political person and was doing this for political purposes. Really nasty stuff... Cambridge Analytica kind of stuff. I'll bet she's still stalking me today and has me in her database. What the hell is she going to do with that database? It worries me.


----------



## sags (May 15, 2010)

People applaud and cheer this in the movies......









The Girl in the Spider's Web (2018) - IMDb (www.imdb.com)

"Young computer hacker Lisbeth Salander and journalist Mikael Blomkvist find themselves caught in a web of spies, cybercriminals and corrupt government officials."


----------



## cainvest (May 1, 2013)

james4beach said:


> She was a political person and was doing this for political purposes. Really nasty stuff... Cambridge Analytica kind of stuff. I'll bet she's still stalking me today and has me in her database. What the hell is she going to do with that database? It worries me.


I don't know j4b, it sounds like you've got a number of unsubstantiated fears. Your FB real-name thing is not real, the Netflix story link is bogus ... it just sounds like you're building up nothing. If someone is really after you, then you do have something to worry about.

I know people who have a significant social media footprint due to their work, using their real names ... nothing bad has ever happened to them. Also, almost all FB users I know use their real names, and again, no evil plots have arisen.


----------



## james4beach (Nov 15, 2012)

cainvest said:


> I know people that have a significant social media footprint due to their work, using their real name ... nothing bad has ever happened to them. Also, almost all FB users I know use their real name, again, no evil plots have arisen.


I'm saying it poses a risk. Just because something bad hasn't happened to them yet, doesn't mean all is well.

It's like driving with worn out brakes. Just because nothing bad has happened to you yet, doesn't mean it's a good idea to drive around with worn out brakes.


----------



## james4beach (Nov 15, 2012)

I know that "nothing bad has happened to me" and "I have nothing to hide" are common arguments. You hear this all the time.

But remember that many people who work at these social media companies, and who work in the computer security field, tend to avoid these platforms. They value their privacy. Most of my colleagues avoided it.

You should wonder: why do people with expertise in these technologies not embrace them, and why do they value their own privacy so much? Why the big difference in perspective from the layperson, who happily uses them? The documentary @MrBlackhill has posted about discusses this.


----------



## MrBlackhill (Jun 10, 2020)

I think the last few posts are more about identity theft than behaviour manipulation using tech.

I had my identity stolen a few years ago. I still don't know how that happened. I found out about it because a debt collection agency was trying to reach me; they had sent a letter, but there was a slight mistake in the door number. I was lucky enough to see that letter with my name on it and found out the debt collection agency wanted to get in touch with me about $750 unpaid at some store.

It took me 6 months with the police and Equifax to close the file. They had opened 6 bank accounts and spent 750$ on one of them.

Because of that, when I changed job, my security screening didn't pass and I had to explain them I was in process to clear that out. I was also about to buy a property which I wouldn't have been able due to my credit score.


----------



## cainvest (May 1, 2013)

james4beach said:


> I'm saying it poses a risk. Just because something bad hasn't happened to them yet, doesn't mean all is well.


There is risk in most everything in life. We all have to decide which ones we need to pay attention to and which ones are insignificant. I'll just leave it at that. 



james4beach said:


> It's like driving with worn out brakes. Just because nothing bad has happened to you yet, doesn't mean it's a good idea to drive around with worn out brakes.


BTW, terrible analogy. Worn brakes have a well-known mechanical outcome over time for everyone, while your evil cyber scare vision does not.


----------



## james4beach (Nov 15, 2012)

cainvest said:


> BTW, terrible analogy. Worn brakes have a well known mechanical outcome over time for everyone while your evil cyber scare vision does not.


I agree, that was a bad analogy.

Leaving an excessive social media footprint will often not be a problem. However in certain circumstances it can become a huge problem. If we're unlucky, these kinds of things can happen:


- an oppressive regime (like China)
- tight border restrictions (already the case in the US, which does look at social media profiles)
- legal trouble or suspicion by police
- ending up on the wrong side of social trends (e.g. being Jewish, Muslim, Latino)
- being associated with a hated group (e.g. environmentalist, O&G, *being wealthy*)
- identity theft

This isn't an exhaustive list, but do you really think these are such rare scenarios? People already have issues crossing borders due to social media. Many countries have oppressive regimes. The US may be becoming more oppressive.

I don't think it's far fetched at all, that a person with a large social media footprint can encounter trouble.


----------



## fireseeker (Jul 24, 2017)

I think this NYT piece explains some of the things James is talking about.
Your Apps Know Where You Were Last Night, and They’re Not Keeping It Secret

You may feel you have nothing to hide. You use nothing but pseudonyms online and you minimize social media use. But details of your life can easily be connected to your real identity.

What if you're gay and you regularly overnight at a lover's house?
Years later you travel to a country with significant homophobia, say Nigeria or Jamaica. One night, there's a knock on your door ...

Or what if far in the future President-for-Life Trump decides to disallow Canadians who have ever supported the NDP from entering the US. Maybe that speech you attended with the girl you were trying to seduce will come back to haunt you.

These scenarios may seem far-fetched. But I'm with James -- we don't know yet how the extensive profiling of private citizens will be used -- or misused -- in the future.


----------



## cainvest (May 1, 2013)

james4beach said:


> I agree, that was a bad analogy.
> 
> Leaving an excessive social media footprint will often not be a problem. However in certain circumstances it can become a huge problem. If we're unlucky, these kinds of things can happen:
> 
> ...


Some of those are slightly better points for discussion, but most are a stretch IMO. Border crossing issues have actually come up in real cases, mainly involving illegal things IIRC. Of course you have to provide them with access, as in, a login (maybe even just apps, links, bookmarks, history, etc.) on your device that they could review. Are they rare? IMO, yes, but I have no source numbers to back that up. Border issues are covered with basic device knowledge; if you're not sure, wipe the device or get a new one if you're worried about the associations you have.


----------



## james4beach (Nov 15, 2012)

cainvest said:


> Some of those are a little better points for discussion, most are a stretch IMO. . . . Are they rare? IMO, yes


You're underestimating how common this is. It's not that rare! It happened to me. When I was first entering the US on a work visa, I got pulled aside. The agent took me into his office, turned his monitor sideways so we could both see it, and said he was going to start searching for me on Google. He did a few searches, found various pages of mine (resume etc.)... nothing too interesting.

But this was a few years ago, and the tools and search methods are much better now. Imagine a young person in their 20s who spent their whole life blabbing about everything on Facebook and Twitter. The border agent starts searching, starts quizzing you on what shows up on your pages.

Imagine a young person has Instagram or something and there are lots of party photos, people drinking, being wild. Normal stuff, but that's a problem.

Basically they do this to use your history and record as collateral to deny you entry or detain you. That's what I mean when I say that your digital footprint can be used *against* you (to harm you). This border agent wasn't searching my history to use it in my favour, was he?

And by the way, being Muslim, or Jewish, is not rare. It's very easy to end up on the "wrong side" of society's popularity contest.


----------



## cainvest (May 1, 2013)

james4beach said:


> You're underestimating how common this is. It's not that rare! It happened to me.


Didn't mean it's rare to get checked, I meant it's rare to get turned back or detained. IIRC, there was a big spike in denied entries when Ottawa decided to share more info with the US border years ago.


----------



## pwm (Jan 19, 2012)

I agree with you entirely, J4B. The collection of personal data by social media companies is very disturbing. The fact that you, as a professional IT security expert, as well as the people who designed the systems in the first place, are all in agreement that this is a serious problem says a lot.

In my opinion, however, there is a greater, more dangerous side of social media: the AI algorithms that select what you see as recommendations or search results are programmed to show you what you are most likely to already believe.

This is what is leading to more social fragmentation in our society. It seems we are splintering into separate tribes based on race, gender, political persuasion and sexual orientation. We are becoming not just Canadian citizens, with all the rights, privileges and responsibilities that entails, but part of smaller groups, often with grievances for past wrongs and a need for compensation.

Anyone can spread ridiculous nonsense on the internet, which can often be accepted as true if your search results show it to you. QAnon and Pizzagate come to mind. It's basic human nature to want to belong to and be accepted by a group, but there can be serious consequences if the group think is based on false facts or complete nonsense.
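To make the point concrete, here's a toy sketch of the kind of ranking logic I mean. This is not any platform's actual code; the names (`Item`, `stance`, `predicted_engagement`) are made up for illustration. It just shows that if you sort a feed purely by predicted engagement, and engagement is modeled as agreement with the user's existing views, the feed naturally leads with content the user already agrees with:

```python
# Toy sketch of an engagement-maximizing feed ranker (illustrative only).
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    stance: float  # topical leaning from -1.0 to 1.0

def predicted_engagement(user_stance: float, item: Item) -> float:
    # Model engagement as highest when the item matches the user's
    # existing views; disagreement lowers the predicted click-through.
    return 1.0 - abs(user_stance - item.stance) / 2.0

def rank_feed(user_stance: float, items: list[Item]) -> list[Item]:
    # Sort purely by predicted engagement -- nothing in this objective
    # asks whether the mix of viewpoints is balanced or accurate.
    return sorted(items,
                  key=lambda it: predicted_engagement(user_stance, it),
                  reverse=True)

items = [
    Item("strongly left op-ed", -0.9),
    Item("neutral report", 0.0),
    Item("strongly right op-ed", 0.9),
]

# A right-leaning user's feed leads with the right-leaning piece.
feed = rank_feed(0.8, items)
print([it.title for it in feed])
```

Nothing in that objective is malicious; it is just optimizing the only thing it was told to optimize, which is exactly how echo chambers emerge as a side effect.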


----------



## Prairie Guy (Oct 30, 2018)

You mention AI algorithms without addressing the massive suppression of right of center thought. Yet the fringe and completely unbelievable Pizzagate bothers you?

I guess you were right in stating that social media just reinforces what you already believe, but you probably can't see that it applies to you.


----------



## james4beach (Nov 15, 2012)

Very good points @pwm about the algorithms creating echo chambers, brainwashing, and fragmenting society.

It's a serious problem, I agree. Facebook and Youtube seem to be particularly bad for this. To counteract this, I never use Facebook, and I don't log into Youtube. I use it anonymously and occasionally clear the cache & cookies, which resets Youtube's state.

One reason I don't use Facebook is that I can't stand the constant polarized content that shows up whenever I browse it. The content looks very nasty to me. The thing is basically a polarization machine, by design! As the documentary describes, it's because polarization and "hot button" issues engage and interest viewers.

A parallel here at CMF was the old Hot Button section. Many people like that kind of thing -- it's exciting, scary, stimulating. So from a business standpoint, you want Hot Button to keep the eyeballs glued (and Facebook loves it). Advertisers love it. Capitalists love it. At the same time, that's where we got the worst and nastiest posting, arguments, endless trolling, nut jobs, and *fuelling division*. A great illustration, I think.

A core problem is that business loves stimulating, divisive engagement. There's money in it, and you can see the same even in older media like TV (CNN, Fox News). But it is bad for society and even bad for mental health. Social media _accelerates_ these. It's amped up, with more effective engagement and more profit (Facebook, Google) and faster and more intense stimulation, polarization, outrage.

Which in turn leads to accelerated and more intense division, accelerated mental health decline, etc.


----------

