Cameras to Detect ‘Abnormal’ Behavior

In the chimerical pursuit of perfect security, Western countries are turning to advanced technology to detect and stop terrorist attacks. But these expensive schemes often fail to deliver greater safety while further eroding personal freedom, as Sander Venema observed in the Netherlands.

By Sander Venema

A few days ago I read an article about how TNO (the Dutch Organization for Applied Scientific Research, the largest research institute in the Netherlands) has developed smart-camera technology for use at Amsterdam Schiphol Airport. These cameras — installed at Schiphol by Qubit Visual Intelligence, a company from The Hague — are designed to recognize certain “suspicious behavior,” such as running, waving your arms, or sweating.

Curiously enough, these are all things commonly found in the stressful environment that an international airport is for many people. People need to get to their gate on time, which may require running (especially if you arrived at Schiphol by train, which in the Netherlands is notoriously unreliable); they may be afraid of flying and trying to get their nerves under control; and airports are also places where friends and family meet after long periods abroad, which (if you want to hug each other) requires arm waving.

Amsterdam’s Schiphol Airport.

Because of this, I suspect this technology will generate a lot of false positives. It’s the wrong technology in the wrong place. I fully understand the need for airport security, and we all want a safe environment for passengers and crew alike. Flights need to operate under safe conditions. What I don’t understand is the mentality that every single risk in life needs to be minimized away by government agencies and combated with technology. More technology does not equal safer airports.
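
To see why false positives matter at this scale, here is a rough back-of-the-envelope calculation — a minimal sketch in which every figure is my own illustrative assumption, not an actual Schiphol or Qubit number:

```python
# Base-rate sketch: all figures below are illustrative assumptions,
# not actual Schiphol or Qubit numbers.

passengers_per_day = 150_000   # assumed daily passenger volume
true_threats = 1               # assume a single real attacker that day
sensitivity = 0.99             # assumed: cameras flag 99% of real threats
false_positive_rate = 0.01     # assumed: cameras flag 1% of innocent people

flagged_innocent = (passengers_per_day - true_threats) * false_positive_rate
flagged_threats = true_threats * sensitivity
precision = flagged_threats / (flagged_innocent + flagged_threats)

print(f"Innocent passengers flagged per day: {flagged_innocent:,.0f}")
print(f"Chance that a flagged passenger is a real threat: {precision:.3%}")
```

Even with these generously optimistic accuracy figures, roughly 1,500 innocent travelers would be flagged every day, and the chance that any given alarm points at a real threat is well under 0.1 percent. Security staff chasing that many phantoms are arguably worse off than staff with no cameras at all.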

Security Theatre

A lot of the measures taken at airports constitute security theatre: they are largely ineffective against real threats and serve mostly for show. The problem with automatic profiling, which is essentially what this program attempts, is that it doesn’t work. Security expert Bruce Schneier has written extensively about this, and I encourage you to read his 2010 essay “Profiling Makes Us Less Safe” about the specific case of air travel security.

The first problem is that terrorists don’t fit a specific profile, and they can carefully avoid “suspicious” actions. These profiling systems can therefore be circumvented once people figure out how they work, and the over-reliance on technology instead of common sense can actually create more insecurity.

In the novel Little Brother, Cory Doctorow wrote about how Marcus Yallow put gravel in his shoes to fool the gait-recognizing cameras at his high school so he and his friends could sneak out to play a game outside. Similar things will be done to try to fool these “smart” cameras, but the consequences can be much greater.

We are actually more secure when we select people for secondary screening at random instead of relying on a specific threat profile or behavioral profile. The whole point of random screening is that it’s unpredictable: a potential terrorist cannot know in advance which criteria will make the system pick him out. If a system does use specific criteria, and its security depends on those criteria remaining secret, then an adversary only has to observe the system long enough to work out what they are.
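
The difference is easy to make concrete. Below is a hypothetical sketch of the two screening policies; the attribute names and the screening rate are my own illustration, not any airport’s actual logic:

```python
import random

SCREENING_RATE = 0.05  # assumed fraction sent to secondary screening

def random_screening(passenger: dict) -> bool:
    """Unpredictable by design: selection is independent of the
    passenger's attributes, so there is nothing for an adversary to learn."""
    return random.random() < SCREENING_RATE

def criteria_screening(passenger: dict) -> bool:
    """Predictable once observed: whoever learns the rules can simply
    avoid running, sweating and arm-waving on the day it matters."""
    return passenger.get("running", False) or passenger.get("sweating", False)
```

Once the criteria leak — and mere observation is enough to leak them — criteria_screening returns False for any prepared attacker, while random_screening keeps exactly the same hit rate no matter what the attacker knows.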

Technology may fail, which is something people don’t always realize. Another TNO report, entitled “Afwijkend Gedrag” (Abnormal Behavior), states in the (admittedly tiny) section that deals with privacy concerns that collecting data about people’s abnormal behavior is ethically justified because society as a whole can be made safer with this data and the associated technology. It also states (and this is an argument I’ve read elsewhere as well) that “society has chosen that safety and security trumps privacy.”

Now, let’s say for the sake of argument that this might be true in a general sense (although whether it is always the case can be debated; personally I don’t think so, as sometimes the costs are simply too high, and we need to keep a free and democratic society after all). The problem is that the way technology and security systems are implemented is usually not something we as a society get to vote on before the (no doubt highly lucrative) contracts get signed.

In the Dutch airport case, Qubit probably saw a way to make a quick buck by talking the Schiphol leadership and/or the government (the Dutch state holds 69.77 percent of Schiphol’s shares) into buying its technology. It is not something the public consciously debated and then made a well-informed decision about.

Major Privacy Issues

We have established that these systems are ineffective, can be circumvented (like any system can), and won’t improve overall security. But much more importantly, there are major privacy issues with this technology. What Schiphol and Qubit are doing here is analyzing and storing data on millions of passengers, the overwhelming majority of whom are completely innocent. This is like shooting a mosquito with a bazooka.

What happens with this data? We don’t know, and we have to take Qubit and Schiphol at their word that data about non-suspect members of the public gets deleted. However, in light of recent events, in which it has evidently seemed convenient to collect and store as much data about people as possible, I highly doubt any deletion will actually happen.

And the sad thing is: the Dutch Ministry of Security and Justice is now talking about implementing the above-mentioned behavioral analysis system at another (secret) location in the Netherlands. Are we all human guinea pigs, ready to be tested and played around with?

What Is Abnormal?

There are also problems with the definitions, something I see again and again with privacy-infringing projects like this. What constitutes “abnormal behavior”? Who gets to decide, and who controls what counts as abnormal behavior and what doesn’t?

Maybe, in the not-too-distant future, the meaning of the word “abnormal” begins to shift and comes to mean “not like us,” for some definition of “us.” George Orwell described this effect in his novel Nineteen Eighty-Four, where ubiquitous telescreens watch and analyze your every move and one can never be sure which thoughts are criminal and which aren’t.

In 2009, when the European research project INDECT was funded by the European Union, members of the European Parliament put critical questions to the European Commission. More precisely, this was asked:

Question from EP: How does the Commission define the term abnormal behavior used in the program?

Answer from EC: As to the precise questions, the Commission would like to clarify that the term behavior or abnormal behavior is not defined by the Commission. It is up to applying consortia to do so when submitting a proposal, where each of the different projects aims at improving the operational efficiency of law enforcement services, by providing novel technical assistance. [Source: Europarl (Written questions by Alexander Alvaro (ALDE) to the Commission)]

In other words: according to the European Commission it depends on the individual projects, which all happen to be vague about their exact definitions. And when definitions like this are not pinned down (and anchored in law, so that the powerful governments and corporations that oversee these systems can be held to account!), they can be changed over time whenever new leadership comes to power, whether within the corporation controlling the technology or within government.

This is a danger that is often overlooked. There is no guarantee that we will always live in a democratic and free society, and the best defense against abuse of power is to make sure that those in power have as little data about you as possible.

Keeping these definitions vague is a major tactic for scaring people into submission, and it carries the inherent danger of legislative mission creep. A measure that was once implemented for one specific purpose soon gets used for another when the opportunity presents itself.

Once people observe that others are getting arrested for seemingly innocent things, many of them (sub)consciously adjust their own behavior. It works similarly with free speech: once certain opinions and utterances are deemed against the law, and are acted upon by law enforcement, many people start thinking twice about what they say and write. They start to self-censor, and this erodes people’s freedom to the point where we slowly shift into a technocratic Orwellian nightmare. And when we wake up, it will already be too late to turn the tide.

Sander Venema is an experienced Web developer and programmer who is concerned about the fast-emerging surveillance states that undermine civil liberties and human rights. He is also the founder of Asteroid Interactive, a Web design and development company based in the Netherlands. [A version of this story was originally posted at http://sandervenema.ch/]

7 comments for “Cameras to Detect ‘Abnormal’ Behavior”

  1. Carl Stoll
    September 20, 2014 at 19:39

    Profiling people (as opposed to profiling behaviour) is useless for detecting decentralized crime, like purse-snatching, bank-robbing, mugging, etc. On the other hand, in the case of network crime (organized crime, terrorism, espionage, etc.) profiling is recommended by top criminologists, although it is an unpopular opinion and few say so openly. However, if you read specialised literature you get the message very quickly. In his book “Understanding Terror Networks”, the former CIA agent Marc Sageman says as much.
    This is the finding of an article published in the Journal of Public Economic Theory:
    “The most effective law enforcement policy imposes only moderate restrictions on the officer’s ability to profile. In contrast to models of decentralized crime, requiring equal treatment never improves the effectiveness of law enforcement.”
    Christopher Cotton & Cheng Li, “Profiling, Screening and Criminal Recruitment,” Journal of Public Economic Theory: http://onlinelibrary.wiley.com/doi/10.1111/jpet.12115/abstract

  2. F. G. Sanford
    September 18, 2014 at 19:12

    Joe T. and Steve both make excellent points. I spent nearly thirty years working in a field in which I had to endure government programs that sought to computerize tasks that worked better without computerization. There were many excuses and rationalizations for these programs, but among the most ridiculous was the notion that computerized data provided “real time” information. My point always was: “My office has a telephone and a door. You can call me or knock on my door. Anything else will get you obsolete information.” The only thing really achieved was to create a surveillance tool that monitored the personnel using it. It never enhanced the task they were trying to accomplish. But it SURE DID pass a lot of taxpayer money to I.T. corporations.

    Facial recognition and behavior identification look to me like the things they now call “quack medical devices”. All those electromagnetic and short wave diathermy devices, the fluorescent high voltage luminescent tubes that glowed green and blue and purple, x-ray machines to treat acne and fluoroscopes to diagnose the designer diseases of the day – most of them proved totally useless, but some of the radiation devices caused massive disfiguration, scarring, debilitation and cancer. But the “technology” had been discovered, and by God, they were going to create a market for it.

    The myriad of “bin Laden” pictures and videos would seem to negate any reliability for “facial recognition”. He never looks the same twice. But then again, Sir Laurence Olivier never looked the same in any of his films, either. Some claim he was fond of modifying his features with flesh tone wax. There will inevitably be ‘hacks’ for modern technology. I wonder how many Congressmen and Senators will be caught sneaking off for clandestine interludes with insignificant others because they exhibit “suspicious behavior” in an airport – obviously, NSA and CIA would have to ensure that there is an algorithm to provide a cyberspace “free pass” – otherwise, imagine the problems Lindsey Graham and John McCain could create!

    • Joe Tedesky
      September 18, 2014 at 19:55

      F.G. Everything you mentioned there, from the recognition tech stuff to the wax, is a money maker. That’s what Homeland Security is all about. This is where that ‘Bin Laden Doctrine’ thing is performing its best. You brought this up in another post.

      I have a better idea. Let’s hire a couple Jersey Doormen to sit at each gate entrance. This isn’t hard security to figure out, but we’ll have none of that cheap greasy stuff, right?

      We should all ask ourselves: after 13 years of this crap… wars that aren’t won, waiting in long lines at airports to be inspected… INSPECTED! Are you kidding me? They’re always changing the rules, and not telling the players. Think of how many times you have been told that, since 9/11, we need you to fill this form out. Oh hell, you know what I’m talking about.
      Joe Tedesky

  3. JB Smith
    September 18, 2014 at 16:25

    The brain initiative and Obama Care are the great deceptions and two of the worst deceptions perpetrated on the citizens of the United States of America. State and local law enforcement are implanting innocent citizens with a biochip. According to “A Note on Uberveillance” by MG & Katina Michael, it’s “like big brother on the inside looking out.” “Safeguards in a World of Ambient Intelligence” by Springer, page 9, states, “law enforcement would have us believe that we can only be safe as long as they know where we are at all times, what we’re doing and what we are ‘thinking’”. It is such an invasion of privacy. They use LRAD aka the active denial system to make you think you are hearing voices – it has technology like the audio spotlight and attempts to put you into a state of what the NIJ calls “excited delirium”. Next they use psyops like stalking, drugs, kidnapping, whatever they can, to either 1) put you in a “crisis stabilization ward” (to take away your 2nd Amendment rights) or 2) a prison or 3) give you an infectious disease. It is a plan for law enforcement to confiscate all guns so they can torture you without fear for their lives. They are targeting female Christians and military veterans. Go to the Rutherford Institute and check out Brandon Raub. His lawyers uncovered the plot against our veterans and made it public in court papers. They take you to crisis stabilization wards even if you’re not a danger to yourself or others and show no sign of mental illness. Why? Because there is a Supreme Court case by Justice Cardozo – Schloendorff v. Society of New York Hospital, 105 N.E. 92 (1914) – that says anyone of sound mind and not a criminal has a right to say what goes in their body or on their body. They want to ensure there is no way you can get this off. No one in America wants this level of privacy invasion. In addition, it allows law enforcement the opportunity to torture you at will with sleep deprivation, heart attacks and other pains. Who wants to give power-hungry cops control over our central nervous systems? The active denial system can murder without leaving a mark! The BioInitiative Report with 2014 additions details all the cancers, diseases and disabilities it causes. Hence the need for Obama Care, and to open all Americans’ medical records for law enforcement to alter. The Joint Non-Lethal Weapons Directorate used untested military applications on me! The Virginia State Police have claimed responsibility for the psyops that caused the Virginia Tech massacre. The JNLWD’s 2010 psyops boasts they can create suicide victims and mass murderers.

    • F. G. Sanford
      September 19, 2014 at 15:09

      JB, Better check your smoke detector – I bet the battery is low.

  4. Joe Tedesky
    September 18, 2014 at 14:49

    Whether he is in heaven with his 72 virgins, or suffering in the fires of hell, Osama bin Laden has to be smiling when he realizes how much the US has spent fighting terrorism since 9/11. Every hour the US spends 8.43 million dollars on Homeland Security. That’s just Homeland Security. No, I’m not sure if that includes TSA… go look it up. Remember this: bin Laden, who was credited with driving Russia bankrupt with its Afghanistan war, believed he could do the same to America. Well, did we fall for his trap? You tell me.

    https://www.nationalpriorities.org/cost-of/homeland-security-since-911/081114/

    Imagine if we spent that much on alternative energy, needed infrastructure, or health care.

  5. Steve Naidamast
    September 18, 2014 at 13:45

    I didn’t even need to read the entire essay to agree with Sander’s description of the stupidity (for want of a better word) of the attempt to have computerized processes monitor every single possible risk indicator of the human body. It is simply not possible, though it makes for great PR in certain business circles and is sure to garner additional revenue for those promoting such technologies.

    As a senior software engineer of many years, I have found it quite amazing how my profession is viewed by the layperson. Most believe that we do some form of “magic” when in fact it is simply the use of pure logic. However, that is only at the level of quality technical professionals.

    Management of software development, at least in the United States, is often run by complete incompetents who are no less self-aggrandizing than any other type of business manager. Worse, they know very little about software development or how to manage it, forcing sub-par quality into the products their teams produce.

    So even at the technical manager level, professional developers and engineers run into the completely illogical methodology of working in such environments, whereby we must develop software against constantly changing specifications as a result of management attempting to placate and coddle their user communities.

    For those who are interested in this paradox there is an excellent article entitled “Why Programming Sucks” at http://stilldrinking.org/programming-sucks.

    Though very accurate in its portrayal of my profession, it is also quite frightening: once readers understand what is being stated in that essay, they will come to realize the utter foolishness of the technologies that Sander has described here…
