CWI#83 - Ethical retrospective on the FB Cambridge Analytica's case

By Dries Bultynck • Issue #83
You’re probably fed up by now with the whistleblower and the psychological warfare tool, although… personally, I’m fascinated by people’s naïve realism about this data breach.

Let’s keep it simple, shall we? I’m not going to suggest anything. Not going to rant. Just listing some things that passed my radar, while everybody points fingers and goes after Facebook for a loophole in the API that followed the terms and compliance rules in place at the time, according to FB and US privacy policies, which is still an opt-out environment even now.
  • There was a research app by Dr. Kogan that made it possible to collect limited profile data from each user of that app and from their network.
  • 270,000 profiles installed the app.
  • 50 million profiles of limited data were harvested.
  • Public PII data was linked by CA to FB profiles to close the loop on the vote (see video #3 with Molly Schweickert).
  • This was no hacking
  • Humans are the weakest link in cybersecurity
  • Privacy is confused with security
  • The users should have the right to know, and potentially pursue legal action against Facebook and other involved parties!
  • Trust of data deletion is an issue!
  • The claim that this is a data breach is completely false? Aleksandr Kogan requested and gained access to information from users who chose to sign up to his app, and everyone involved gave their consent. People knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked.
  • Politics (and other powerful people) should be banned from using this kind of campaigning?
  • Is the information of PII so important on an individual level?
  • Do we really have a clue how much effort & money goes into this? This ain’t a 1, 2, 3 click approach kinda software :)
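The jump between the two figures in the list above (270,000 installers, 50 million profiles) follows directly from friend-network harvesting. A quick back-of-the-envelope sketch; the two totals are from the reporting, the division is mine:

```python
# Figures from the reporting: ~270,000 app installers and ~50 million
# profiles harvested. The app pulled limited data on each installer's
# friends, so we can derive the implied unique-friends multiplier:
installers = 270_000
harvested = 50_000_000

implied_friends_per_installer = harvested / installers
print(round(implied_friends_per_installer))  # ≈ 185 unique friends per installer
```

That number sits close to Facebook's oft-cited average friend-list size, which is why a relatively small app install base could reach tens of millions of profiles.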
This is starting to get really confusing. Cambridge Analytica didn’t really cover up their way of working or their use of PII data. Actually, they (Alexander Nix, CEO at CA, and Molly Schweickert, head of Digital at CA) revealed all the details, and it’s all on YouTube. So I don’t get the fuss about “using the data” (from an analytical perspective), as Vice puts it, yet I do get the legal issues. By the way, don’t forget that those sites run an advertising model based on pageviews.
“We did all the research,” Cambridge Analytica CEO Alexander Nix says, unaware that he’s being secretly filmed. “All the data, all the analytics, all the targeting. We ran all the digital campaign, the television campaign, and our data informed all the strategy.”
These are the videos on YouTube, with all the details of the campaigning. Sorry to say, but this is clever shit. They were published in September 2016, March 2017 and May 2017, respectively.
The Power of Big Data and Psychographics - YouTube
Alexander Nix, CEO, Cambridge Analytica - Online Marketing Rockstars Keynote | OMR17 - YouTube
Cambridge Analytica explains how the Trump campaign worked - YouTube
But here’s the kicker, Obama did the same thing in 2012. People forgot about this? 
In 2012, the Obama campaign encouraged supporters to download an Obama 2012 Facebook app that, when activated, let the campaign collect Facebook data both on users and their friends.
According to a July 2012 MIT Technology Review article, when you installed the app, “it said it would grab information about my friends: their birth dates, locations, and ‘likes.’ ”
The campaign boasted that more than a million people downloaded the app, which, given an average friend-list size of 190, means that as many as 190 million had at least some of their Facebook data vacuumed up by the Obama campaign — without their knowledge or consent.
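The arithmetic behind that claim is worth making explicit. A minimal sketch using the article's own figures (the overlap caveat is mine):

```python
downloads = 1_000_000   # Obama 2012 app downloads claimed by the campaign
avg_friends = 190       # average friend-list size cited by MIT Technology Review

# Naive upper bound: counts every friend once per installer. Real unique
# reach is lower, because friend lists overlap heavily.
upper_bound = downloads * avg_friends
print(f"{upper_bound:,}")  # 190,000,000
```

Hence "as many as 190 million" is an upper bound, not a count of distinct people.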
If anything, Facebook made it easy for Obama to do so. A former campaign director, Carol Davidsen, tweeted that “Facebook was surprised we were able to suck out the whole social graph, but they didn’t stop us once they realized that was what we were doing.”
They used the exact same technique, called psychographics.
So this is really a privacy issue; let’s not be confused about that. Are people OK with these techniques? With these kinds of marketing approaches?
This is very, very interesting, right? Or do we really not care that much? Is it really about our data, or about the way they ‘protect’ our data? A violation of trust rather than a fear of it being used for specific purposes.
The value of data is only in the action, right?
T. Baekdal makes an excellent point. This video actually confirms what the problem really is. The data breach is one part, but how the data was used legally (not from a data-science perspective; you can apply the same techniques to other data) is the bigger issue.
Whenever there is big power (and thus cash) involved, ethical boundaries will be broken
We are Missing the Point about Facebook, Cambridge Analytica, etc. - Baekdal Plus
Marketing lessons
I think this sums it up very nicely. Link building in its purest essence: creation of word of mouth. Oh yes… do they get distribution!
“We just put information into the bloodstream to the internet and then watch it grow, give it a little push every now and again over time to watch it take shape. And so this stuff infiltrates the online community and expands but with no branding—so it’s unattributable, untrackable.”
We all know we are constantly being monitored online: our Gmail account, our FB account, what we do on every website, and so on. As individuals we are tracked, yet not always fully matched with our personal information (hey, it’s you, Dries).
Here’s my question to you: 
To what extent are you willing to give your personal information, deliberately, in order to receive or create added value?
This intrigues me. Is it really an ethical question? Or is it a personal question we should ask ourselves? The system can protect us, partly, but it can’t protect us from people who know how to turn intelligence into an asset.
And yet… we want this service, for free. 
Ain’t gonna happen. Is mankind really that stupid?
Check the paragraph under the title. Sigh.
Interesting reads
Why I Took Legal Action Against Cambridge Analytica - Motherboard
Exclusive: Alphabet X is exploring new ways to use AI in food production - MIT Technology Review
"Many Belgian companies are actually digital adolescents with a lot of work ahead of them to develop a mature data strategy"
Squishies, the next big toy trend, are adorable creatures that satisfy the human longing for touch — Quartz
Setting the record straight: is streaming greener than vinyl?
That’s it for this week
d - out.
By Dries Bultynck

Remarkable reads, spotted momentum & behavioral patterns in media, digital, retail, economics, health, climate, etc.
Often holistic, sometimes very specific.

I'm Dries & the internet is the best thing that ever happened to me.
