8th December 2014, 22:56
T*ssa.Goetschalckx
Registered: Sep 2013 | Location: Herentals | Posts: 112
Spotting terrorist behaviour online is harder than finding child abuse images
The ISC suggests internet companies can detect terrorist communications in the same way as search engines find child abuse images. But these are very different technical undertakings

Why are we outraged by the suggestion that Facebook users’ messages should be screened for potential terrorist threats yet we accept that airline passengers are screened for terrorist threats before boarding a plane? What’s the difference between moving people or information around the world? This is the question raised by the UK parliament’s intelligence and security committee (ISC) when it suggests Facebook and other internet platforms should “take responsibility” for detecting terrorist activity online.

There are a number of reasons why requiring Facebook and other websites to become partners in state surveillance threatens free expression and privacy, but before considering this radical step, let us examine whether it makes technical sense.

We might like to believe that internet powerhouses possess the technological wizardry to pinpoint terrorist behaviour hidden in the hundreds of millions of messages generated each hour with the same accuracy as an airport metal detector that can spot a revolver in a traveller’s pocket. Implicit in the ISC report is the suggestion that these Silicon Valley geniuses could make the world a safer place but simply refuse to do so. But the reality is more complex.

The committee suggests online services can spot terrorist behaviour in much the same way as ISPs and search engines currently detect and remove child abuse images. Yet these are very different technical undertakings. Most child abuse images are detected by computer programmes designed to notice patterns recurring from one image to another. Law enforcement experts identify an initial set of illegal images and the pattern recognition software flags online picture files that are similar.
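To make that mechanism concrete, here is a minimal sketch in Python (using the Pillow imaging library) of perceptual hashing, the general family of techniques behind systems such as PhotoDNA. The production algorithms are not public, so everything below, including the example hash value, is illustrative only. The point is that visually similar images yield hashes differing in only a few bits, so new files can be compared against hashes of known illegal images without the images themselves ever being redistributed.

```python
# Minimal "average hash" sketch: shrink an image to an 8x8 grayscale
# grid and record which pixels sit above the mean brightness, so that
# resized or recompressed copies still produce nearly identical hashes.
from PIL import Image  # pip install pillow


def average_hash(path: str, size: int = 8) -> int:
    """Return a 64-bit perceptual hash of the image at `path`."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Number of bits in which two hashes differ."""
    return bin(a ^ b).count("1")


# Hashes of known illegal images, as supplied by law enforcement
# (this value is hypothetical, purely for illustration).
known_hashes = {0x8F3C0A9122D0FF10}


def is_suspect(path: str, threshold: int = 5) -> bool:
    """Flag an image within `threshold` bits of any known hash."""
    h = average_hash(path)
    return any(hamming_distance(h, k) <= threshold for k in known_hashes)
```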

YouTube uses a similar approach to identifying videos that may infringe copyright. Copyright holders upload samples of their works and then a very clever YouTube system flags any videos on the site that contain similar images or audio. These techniques all depend on being able to train systems to know what kind of material to look for. If you give these systems enough examples of what they are looking for and provide feedback on their success, they tend to work pretty well.
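The matching side of such a system might look like the sketch below. Content ID itself is proprietary, so the structure, names and thresholds here are assumptions for illustration: a reference work is fingerprinted once (here, simply as a per-second sequence of 64-bit frame hashes like those above), and each new upload is fingerprinted the same way and scored segment by segment against the registry.

```python
# Sketch of reference-fingerprint matching in the style of Content ID
# (illustrative only; the real system is proprietary).
from typing import Dict, List

# A video fingerprint: one 64-bit frame hash per second of footage.
Fingerprint = List[int]

registry: Dict[str, Fingerprint] = {}  # work title -> reference fingerprint


def register_work(title: str, fingerprint: Fingerprint) -> None:
    """Called when a copyright holder uploads a reference sample."""
    registry[title] = fingerprint


def match_score(upload: Fingerprint, reference: Fingerprint,
                bit_threshold: int = 5) -> float:
    """Fraction of aligned segments that are near-identical."""
    n = min(len(upload), len(reference))
    if n == 0:
        return 0.0
    close = sum(1 for u, r in zip(upload, reference)
                if bin(u ^ r).count("1") <= bit_threshold)
    return close / n


def flag_upload(upload: Fingerprint, min_score: float = 0.8) -> List[str]:
    """Return the titles this upload appears to infringe."""
    return [title for title, ref in registry.items()
            if match_score(upload, ref) >= min_score]
```

Note that both this and the image-hashing sketch depend entirely on a library of known reference material, which is precisely the training data the article argues is missing in the terrorism case.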

Finding terrorist communications is much harder than finding copyright-infringing videos or child abuse images. First, there just aren’t that many terrorists in the world (luckily), so there is little data with which to train automated alerts. More importantly, according to the ISC report, terrorist behaviour is adaptive. A video rip-off of a copyrighted movie can’t change its characteristics to avoid detection. Nor can a child abuse image morph into something else. Terrorists, however, know they are being watched and take steps to avoid detection.

In fact, the ISC report is a compelling explication of just how hard it is even for expert human investigators to interpret the behaviour of potential terrorists. The perpetrators of the terrible murder of Fusilier Lee Rigby were under regular surveillance by the British authorities, who were still unable to pinpoint the threat. Similarly, while automated terrorist alerts sound appealing, it is hard to design systems that can discern intent, so any such approach would carry a real risk of misidentification.
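The base-rate arithmetic makes the misidentification risk concrete. With illustrative numbers of my own choosing (not the ISC’s), even a detector that is almost always right about any individual user would bury investigators in false alarms:

```python
# Back-of-the-envelope illustration of the base-rate problem.
# All four inputs are assumptions chosen for illustration.
population = 30_000_000       # monitored users
detection_rate = 0.99         # fraction of real plotters the system flags
false_positive_rate = 0.001   # fraction of innocent users it also flags
actual_threats = 100          # genuine plotters in the population

flagged_threats = actual_threats * detection_rate                        # ~99
flagged_innocents = (population - actual_threats) * false_positive_rate  # ~30,000

precision = flagged_threats / (flagged_threats + flagged_innocents)
print(f"Innocent users flagged: {flagged_innocents:,.0f}")
print(f"Chance a flagged user is a real threat: {precision:.2%}")  # ~0.33%
```

On these assumptions, roughly 30,000 innocent users are flagged for every 99 genuine threats, so well under 1% of alerts point at a real plotter; and that is before the plotters adapt their behaviour to evade the filter.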

Of course, the internet ought not to be a free-fire zone for terrorists and criminals. So what kind of help ought websites to offer law enforcement and intelligence investigations? Rather than being expected to proactively identify terrorist needles in the giant online haystack, they ought to respond to reasonable, judicially supervised law enforcement requests for information about specific individual suspects. That way courts can assess the reliability of the data being sought and provide appropriate protections for individuals. Asking websites to monitor and remove speech from their services on their own initiative poses a grave risk to freedom of expression.

Despite the protestations of the committee, US internet platforms do in fact respond to UK surveillance requests, returning data roughly three-quarters of the time, only a slightly lower rate than for US requests, according to transparency reports produced by the companies. I’m not troubled by the difference. As a user I want the websites I use to scrutinise these requests carefully and resist the ones that seem beyond the bounds of the law. Different countries have varying degrees of privacy protection in their surveillance laws, and I want internet companies to stand up for their users’ rights.

Online surveillance will continue to be a toxic issue until we have a reset in the relationship between governments (both law enforcement and intelligence agencies) and the online community (both providers and users). This reset requires substantive agreement on human rights norms for global surveillance, and some real accountability mechanism that users can trust.

Computer scientists in my lab and around the world are designing a new class of accountable systems that can help restore trust. These systems enable both governments and internet platforms to provide transparent proof to the world that they are actually following the rules as required by law.
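One recurring building block in this line of research is a tamper-evident audit log. The sketch below is a generic illustration of that idea, not a description of any specific MIT system: each logged surveillance request cryptographically commits to the one before it, so an auditor who holds only the latest digest can detect any retroactive deletion or alteration of the record.

```python
# Minimal hash-chained audit log: each entry commits to the previous
# digest, so tampering with any past entry breaks every later link.
import hashlib
import json
import time


class AuditLog:
    def __init__(self):
        self.entries = []
        self.head = "0" * 64  # digest of the (empty) chain so far

    def append(self, record: dict) -> str:
        """Log one record (e.g. a legal request); return the new digest."""
        entry = {"prev": self.head, "time": time.time(), "record": record}
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append((digest, entry))
        self.head = digest
        return digest

    def verify(self) -> bool:
        """Recompute the whole chain; any alteration breaks a link."""
        prev = "0" * 64
        for digest, entry in self.entries:
            if entry["prev"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
            if recomputed != digest:
                return False
            prev = digest
        return prev == self.head


# Usage (the record fields are hypothetical):
log = AuditLog()
log.append({"agency": "ExampleAgency", "warrant": "2014-123",
            "target": "user@example.com"})
assert log.verify()
```

A log like this does not by itself prove the rules were followed; it proves the record of what was done cannot be quietly rewritten, which is the accountability property the article points to.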

Restoring trust online begins with a binding commitment to global human rights principles by those who would conduct surveillance, and then legal and technical systems that assure those rules are being followed.

Daniel J Weitzner is principal research scientist at the MIT Computer Science and Artificial Intelligence Lab, former White House deputy chief technology officer for internet policy, and co-founder of TrustLayers, a new accountable systems company

Source: http://www.theguardian.com/technolo...al-undertakings
4 December 2014

Comment: I think it is a good idea to use internet filters to catch potential terrorists and prevent attacks. Many innocent people die in each of these attacks. It is only natural that the same approach is used to try to identify paedophiles and save children from abuse.
I understand that it is difficult to define good search criteria: anyone engaged in shady practices is almost certain to use coded language. So far these researchers have not been able to come up with many good criteria to feed into the automated search systems, which is a pity. The internet and technology are already far advanced today. I have no idea how I would manage this myself, but I feel they are not yet far enough along. Terrorism is, sadly, something we have already heard a great deal about in the news; it happens often. If a tool exists to prevent and combat terrorism, I think we should develop it as well as we can and use it to save those innocent people.