
Facebook Is About To Make Catfishing Problems Even Worse

Image via Careful Parents

Over the past week, a number of people have shared articles with me about Facebook’s testing of a new feature that purportedly alerts users when it detects that someone is impersonating their account. Once alerted, the user can then report the fraudulent account and pray that Facebook will take it down. However, given my eight years of experience with this problem, I feel qualified to say that this approach simply will not work, for a number of reasons.

  1. Facebook often fails to take down fraudulent profiles: While I have successfully had Facebook take down hundreds of fake profiles (I find several new ones each day), there are certain profiles that it simply does not take down. For instance, I’ve been trying to get Facebook to take down the account of “Trofimov Sergei” (a user who is clearly using a profile photo of me and my son) for over a year now. Yet, no matter how many times I report the account, the profile remains. More disturbing is the fact that if you search for “Trofimov Sergei” on Facebook, you will see dozens of fake accounts by the same name using stolen photos of other men. Most of the deception is done in private communication with the (potential) victims, but every once in a while, you will find a public post where the fraudsters are asking for money for a feigned illness. Luckily, there are many people (often former victims) who uncover these fraudulent accounts and share what they know in order to contain some of the damage.
  2. Scammers may use photos of your children as their profile photo: After hundreds of reports, Facebook still refuses to take down the account of “Nelson Colbert,” a scammer who is using photos of my children as his profile photo. When you report an impersonation in Facebook’s current reporting tool, you ultimately have to choose one of the following: A) “This timeline is pretending to be me or someone that I know”, or B) “This timeline is using a fake name.” I have been completely unsuccessful when using Option B, and I have had only limited success with Option A: when you choose this option, you are asked to identify the user who is being impersonated, but when I identify myself, Facebook quickly rejects the report because I am clearly not the person in the profile photo. I have attempted to use Facebook’s “Report An Underage Child” tool (which is only available in Canada after you log out, apparently), but this has also been completely unsuccessful. The most unnerving part of this particular profile is that I receive more reports about it from victims than I do about any other. In fact, there are literally dozens of pages of search results that relate to “Nelson Colbert” and this scammer’s involvement in fraudulent activities. Yet, it appears that Facebook has made this account untouchable. I suspect that the scammer behind it may have created falsified documentation to get the account validated internally.
  3. Scammers may use your elderly mother’s photo as their profile picture: These criminals often create sophisticated networks of fake friends and family as part of their schemes. For instance, scammers created a fake profile using my mother’s photos under the name Maria Gallart. I cannot report this profile directly to Facebook; I am only able to report it to my mother so that she can deal with it. I did so, and as you would imagine, the distress, anxiety, and uncertainty that this caused my nearly 80-year-old mother was not something that she needed, nor something that she necessarily knew how to handle. And even with my assistance, reporting the fraudulent account from my mother’s account (many times) has not led to it being taken down.
  4. Facebook doesn’t always believe the “real” person in cases of identity fraud: Facebook has taken down my account twice because a scammer reported me as being the fake Alec Couros. In both cases, I had to submit my passport to Facebook via email for verification (which is incredibly problematic for security reasons). I am unsure why I had to do this twice, and I am puzzled as to why my account wasn’t verified either time (even though I have applied for verified status). Facebook’s proposed system will have to rely on a secure, consistent, and foolproof process for verifying accounts if it is to be successful. To date, the company has failed miserably in this respect.
  5. Facebook’s proposed system could give an advantage to the criminals: Fraudsters have often used photos of me that I have never previously used on Facebook. Based on the incomplete details provided so far about this new alert system, one might assume that if I were to use any of my personal photos after a scammer had done so, I would be the one flagged as an impersonator. Thus, the criminal might easily be regarded as having the authentic profile, which sounds like really bad news.

The Mashable article shared at the beginning of this post states that Facebook is rolling out these features as the company attempts to push its presence into regions of the world where “[impersonation] may have certain cultural or social ramifications” and “as part of ongoing efforts to make women around the world feel more safe using Facebook.” If that is the goal, Facebook’s proposed technology won’t help, and it may very well make things worse for women (or anyone) using the site. Already, Facebook is plagued with identity thieves who adversely affect the safety, comfort, and freedom of many of its users, and the problem will only continue to grow with these types of half-baked efforts. You may not be affected now, but unless Facebook does something to fully address this issue, you almost certainly will be.

(Digital) Identity in a World that No Longer Forgets

[This post was written jointly with Katia Hildebrandt and also appears on her blog.]

In recent weeks, the topic of digital identity has been at the forefront of our minds. With election campaigns running in both Canada and the United States, we see candidate after candidate’s social media presence being picked apart, with past transgressions dragged into the spotlight for the purposes of public judgement and shaming. The rise of cybervigilantism has led to a rebirth of mob justice: what began with individual situations like the shaming of Justine Sacco has snowballed into entire sites intended to publicize bad online behaviour with the aim of getting people fired. Meanwhile, as the school year kicks into high gear, we are seeing evidence of the growing focus on digital identity among young people, including requests for our interning pre-service teachers to teach lessons about digital citizenship.

All this focus raises big questions around societal expectations of digital identity (i.e. that it be sanitized and mistake-free) and the strategies that are typically used to meet those expectations. When talking to young people about digital identity, a typical approach is to stress the importance of deleting negative artefacts and replacing them with a trail of positive ones that will outweigh these seemingly inevitable liabilities. Thus, digital identity has, in effect, become about gaming search results by flooding the Internet with the desired, palatable “self” so that this performance of identity overtakes all of the others.

But our current strategies for dealing with digital identity are far from ideal. From a purely practical perspective, it is basically impossible to erase all “negatives” from a digital footprint: the Internet has the memory of an elephant, with cached pages, offline archives, and non-compliant international service providers. What’s more, anyone with Internet access can contribute (positively or negatively) to the story that is told about someone online (and while Europe has successfully lobbied Google for the “right to be forgotten” and to have certain results delisted from search, that system only scratches the surface of the larger problem and raises other troubling issues). In most instances, our digital footprints remain in the control of the greater society, and particularly of large corporations, to be (re)interpreted, (re)appropriated, and potentially misused by any personal or public interest.

And beyond the practical, there are ethical and philosophical concerns as well. For one thing, if we feel the need to perform a “perfect” identity, we risk silencing non-dominant ideas. A pre-service teacher might be hesitant to discuss “touchy” subjects like racism online, fearing future repercussions from principals or parents. A depressed teenager might fear that discussing her mental health will make her seem weak or “crazy” to potential friends or teachers or employers, and thus not get the support she needs. If we become mired in the collapsed context of the Internet and worry that our every digital act might someday be scrutinized by someone, somewhere, the scope of what we can “safely” discuss online becomes incredibly narrow, limited to the mainstream and inoffensive.

And this view of digital identity also has implications for who is able to say what online. If mistakes are potentially so costly, we must consider who has the power and privilege to take the risk of speaking out against the status quo, and how this might contribute to the further marginalization and silencing of non-dominant groups.

In a world where forgetting is no longer possible, we might instead work towards greater empathy and forgiveness

Our current strategy for dealing with digital identity isn’t working. And while we may eventually have new laws addressing some of these digital complexities (for instance, legislation is currently being proposed around issues of digital legacy), such solutions will never be perfect, and legislative change is slow. Perhaps, instead, we might accept that the Internet has changed our world in fundamental ways and recognize that our societal mindset around digital missteps must be adjusted in light of this new reality: perhaps, in a world where forgetting is no longer possible, we might instead work towards greater empathy and forgiveness, emphasizing the need for informed judgment rather than snap decisions.

So what might that look like? The transition to a more forgiving (digital) world will no doubt be a slow one, but one important step is making an effort to critically examine digital artefacts before rendering judgment. Below, we list some key points to consider when evaluating problematic posts or other content.

Context/audience matters: We often use the “Grandma rule” as a test for appropriateness, but given the collapsed context of the online world, it may not be possible to participate fully in digital spaces if we adhere to this test. We should ask: What is the (digital) context and intended audience for which the artefact was shared? For instance, was it originally posted on a work-related platform? A dating site? A forum? A news article? A social network? Was the communication appropriate for the platform where it was originally posted?

Intent matters: We should be cognizant of the replicability of digital artefacts, but we should also be sure to consider intent. We should ask: Was the artefact originally shared privately or anonymously? Was the artefact intended for sharing in the first place? How did the artefact come to be shared widely? Was the artefact made public through illegal or unethical means?

History matters: In face-to-face settings, we typically don’t unfriend somebody based on one off-colour remark; rather, we judge character based on a lifetime of interactions. We should apply the same rules when assessing a digital footprint: Does the artefact appear to be a one-time thing, or is it part of a longer pattern of problematic content or behaviour? Has there been a sincere apology, and is there evidence that the person has learned from the incident? How would we react to the incident in person? Would we forever shame the person, or would we resolve the matter through dialogue?

Authorship matters: Generations of children and teenagers have had the luxury of having their childhoods captured only by the occasional photograph, and legal systems are generally set up to expunge most juvenile records. Even this Teenage Bill of Rights from 1945 includes the “right to make mistakes” and the “right to let childhood be forgotten.” We should ask: When was the artefact posted? Are we digging up posts that were made by a child or teenager, or is this a recent event? What level of maturity and professionalism should we have expected from the author at the time of posting?

Empathy matters: Finally, we should remember to exercise empathy and understanding when dealing with digital missteps. We should ask: Does our reaction to the artefact pass the hypocrite test? Have we made similar or equally serious mistakes ourselves but been lucky enough to have them vanish into the (offline) ether? How would we wish our sons, daughters, relatives, or friends to be treated if they made the same mistake? Are the potential consequences of our (collective) reaction reasonable given the size and scope of the incident?

This type of critical examination of online artefacts, taking into consideration intent, context, and circumstance, should certainly be taught and practiced in schools, but it should also be a foundational element of active, critical citizenship as we choose candidates, hire employees, and enter into relationships. As digital worlds signal an end to forgetting, we must decide as a society how we will grapple with digital identities that are formed throughout the lifelong process of maturation and becoming. If we can no longer simply “forgive and forget,” how might we collectively develop a greater sense of digital empathy and understanding?

So what do you think? What key questions might you add to our list? What challenges and opportunities might this emerging framework provide for digital citizenship in schools and in our greater society? We’d love to hear your thoughts.