Fake News on the Internet - Nature, Dangers and Countermeasures

Written by Ioannis Ntokos

What is fake news?

Fake news is not a new phenomenon. According to Wikipedia, fake news is "a form of yellow journalism or propaganda that consists of deliberate misinformation or hoaxes spread via traditional print and broadcast news media or online social media." We notice, therefore, that fake news can spread through a variety of communication channels. It is worth mentioning, for example, the spread of rumors within closed, or even wider, social circles over the last century (so-called gossip, which was often based on hearsay rather than on reality).

What has mainly changed about fake news in the 21st century is the method by which it spreads. Beyond the "traditional" channels, which are still used to create propaganda, modern media, relying chiefly on the Internet for their operation, have been added to the list of ways in which untrue news reaches the average user. Newspapers, magazines and news agencies (especially the largest ones) have acquired their own websites, online channels and electronic editions to exploit the widespread growth of the internet. Access to these media has become very easy online, and the internet has become a source of information for millions of people around the world. According to a Reuters Institute survey (2016), most residents of the 26 countries surveyed now rely more on social media than on the press to stay informed.

Worth mentioning at this point is the impressive but also worrying (as we shall see below) Chinese avant-garde in news broadcasting: the Chinese state news agency Xinhua has designed and created the first news anchors based entirely on Artificial Intelligence. This is undoubtedly an impressive achievement, but the risks of misinformation and fake news remain.

What are the implications and risks of spreading fake news over the Internet?

The first of the most damaging effects of fake news is of a legal nature and has to do with possible violations of the rights to information and expression. These rights are enshrined in the European Convention on Human Rights (Article 10 ECHR) and constitutionally guaranteed in Greece under Articles 5A and 14 respectively. Based on these provisions, every Greek citizen must be able to be informed and to express himself or herself without restrictions (save for the exceptions the law allows). In legal terms, the impact of fake news at first glance seems very significant: such news misinforms citizens, thereby violating their fundamental rights. At first sight, therefore, the dissemination of such news is constitutionally forbidden in Greece, and this prohibition applies to news broadcast by any means, including the Internet. On a more practical level, fake news found online can be harmful because of the way it becomes known and accessible to the general public. The medium through which this news is transmitted (i.e. the Internet) facilitates: a) the ease of writing and authoring it; b) the ease of its transmission (the recipient can now play an active role, redistributing it to more people through social networking platforms); and c) the difficulty of identifying its source, given the vast amount of information available online.

What does all this mean? Quite simply, the ease with which such news can reach an Internet user, coupled with the overwhelming amount of information already available and constantly produced, makes its dissemination extremely easy. This ease grows sharply with the help of social media, which offer an extremely effective channel for delivering such news to its ultimate recipient. Despite being easy to create, fake news can be extremely persuasive and plausible. At the same time, it is difficult to cross-check and confirm, both because of the sheer volume of information available on the Internet and because of the difficulty the average user faces in filtering it.

Undoubtedly, the most damaging effect of fake news is the spreading of its untrue content. The person targeted by such publications faces the dangers arising from the motives of those who disseminate them. Every kind of purpose can be served by the dissemination of such news: political, economic, social, humanitarian or terrorist, among others. Influencing the recipient may lead to subsequent manipulation, terror, prejudice and marginalization. Similarly, especially when fake news refers to individuals or organizations, it can cause them non-material damage in the form of slander, prejudice and hatred (or even unwarranted positive feelings), none of which rests on true facts.

Methods of dealing with the phenomenon

In conclusion, misleading news on the Internet, while having much in common with that circulating in more "disconnected" environments, is even more damaging. How, then, can we protect ourselves against news of dubious validity? The most effective tool is undoubtedly critical thinking: the better you filter and analyze the content you encounter on the Internet, the easier it is to identify inconsistencies and misconceptions.

For this, you can ask the following:

– What is the source of the news?

– Who is the author / writer?

– Are the above credible?

– When was the news published? Is it recent / still relevant?

– Has it been published by other media / different sources?

– Is the content objective or subjective / biased?

Awareness of the phenomenon, its features and the ways in which it spreads is the first precautionary step against news fabricated for misleading and malicious purposes.


Artificial intelligence in the courts: Myths and reality

By Eleftherios Chelioudakis

Many people think that the term "artificial intelligence" is synonymous with technological development, and it is frequently presented as the hope for resolving serious problems that plague our societies.

Through its articles, our team has tried to explain to our readers the term "artificial intelligence", as well as why we should be cautious about developments in this sector.

In this article we will focus on the frenzy surrounding artificial intelligence, examine whether the idea of using it in the area of justice constitutes a new and innovative approach, and finally note issues that merit particular attention. This is not, however, the first time that Homo Digitalis has turned its attention to the use of artificial intelligence in the area of justice: we have already hosted an article on the philosophical issues raised by the replacement of the judiciary by machines and Machine Learning.

In the information society we live in, the increased use of computers and the internet results in the rampant growth of the modern human's digital footprint. Smart devices such as smartphones, wearable systems that record our sporting activity and health status, and even simple household appliances like coffee machines, fridges and toothbrushes collect and process a flood of information about their users, revealing aspects of their daily life and personality.

The volume of information produced is so enormous that the gathered information itself acquires value. Large firms base their business model on the exploitation of this information: their goods and services are provided "for free", and users "pay" with their personal data, which are analysed and shared with third parties to create targeted advertisements that generate profit.

As societies, we are drifting toward the belief that collecting information will bring us closer to acquiring knowledge. As human beings we lack the intellectual capacity to process the vast amounts of information arising; thus, we place our hopes in the computing power of machines. The key that opens the door to controlling diseases, combating crime, administering our cities better and improving our personal well-being is data analysis: identifying correlations within the data and producing forecasts and comparisons. At least, this is the idea we are called to embrace.

Thus, sectors of artificial intelligence that two decades ago were considered outdated and ineffective, such as machine learning, suddenly attracted renewed attention. The modern smartphone, the spread of the internet and computers, improved processors, and the increased capacity of data storage media gave machine learning algorithms the fuel they required: large amounts of data.

Serious problems in various sectors, such as health, local governance, self-improvement, transport and policing, can supposedly be resolved as if by magic through the analysis of the information gathered. Naturally, the judicial system could not be absent from this list.

At this point, we should distinguish between the use of artificial intelligence mechanisms and tools to support court administration (such as Natural Language Processing tools aimed at automating bureaucratic procedures and at the rapid drafting, registration and analysis of judgments and other documents) and the use of such mechanisms and tools in the administration of justice itself, influencing the decision-making of judicial authorities. This article does not address the first category, as the challenges and restrictions arising there are the same as in other application areas of Natural Language Processing technology. Instead, it focuses on the second category and on the idea that artificial intelligence mechanisms could assist judicial authorities in the decision-making process.

The truth is that the idea of using technology in judicial decision-making is neither pioneering nor innovative. On the contrary, it is an old idea that has appeared and been used extensively in foreign judicial systems, such as those of Canada, Australia and the U.S.A., since the end of the last century, and it is well known in the related fields of forensic psychology and psychiatry. Systems such as the "Level of Service Inventory-Revised (LSI-R)" and the "Correctional Offender Management Profiling for Alternative Sanctions (COMPAS)" have been used as risk assessment tools to assist the judge at various stages of criminal proceedings, such as pre-trial detention, conviction, sentencing and the decision to release a convicted person on parole before the sentence has been fully served. Frequently, technologies that merely implement mathematical and statistical methods of risk assessment are christened "artificial intelligence" by their creators and the media. These systems have been analysed repeatedly over the last few decades, and the critical reviews of their reliability and effectiveness differ depending on which agency funds the relevant research.

While uncertainty about the use of such technologies in the area of justice has been strongly expressed, both the Council of Europe (1. Parliamentary Assembly (PACE), Recommendation 2101 (2017) 1, Technological convergence, artificial intelligence and human rights; 2. Parliamentary Assembly: Motion for a recommendation on Justice by algorithm – the role of artificial intelligence in policing and criminal justice systems; and 3. European Commission for the Efficiency of Justice (CEPEJ): European Ethical Charter on the use of artificial intelligence in judicial systems) and the European Union (1. European Commission: Communication on Artificial Intelligence for Europe; and 2. European Commission's High-Level Expert Group on Artificial Intelligence (AI HLEG): Draft Ethics Guidelines for Trustworthy AI) have, through their actions during the past two years, explored the possibility of their Member States using artificial intelligence mechanisms in the area of justice.

Although we are against the introduction of artificial intelligence mechanisms into judicial decision-making, and believe that this approach solves none of the problems besetting the judiciary, the frenzy of the last few years makes it important to mention briefly the main issues arising from the prospective use of artificial intelligence in the field of justice, especially in criminal proceedings. The enumeration that follows is indicative:

    1. Decision-making based solely on automated processing: as provided by Article 11 of Directive 2016/680, taking a decision based solely on automated processing which produces adverse legal effects for the data subject, or significantly affects him or her, is forbidden, except where the law authorises the decision in question and provides appropriate safeguards for the data subject's rights and freedoms, at the very least the right to human intervention. It is therefore recognised that the human factor is an indispensable component of the decision-making process.
    2. Risk of discrimination and quality of the data used: artificial intelligence mechanisms trained on processed data are dependent on the quality of those data. In simple terms, the predictions about my future behaviour will be based on other people's data, on which the algorithm has been trained. If the quality of the training data is low, or if any prejudice underlies those data, the predictions are condemned to be unreliable. They may also be illegal if they are based on personal data which are by their very nature particularly sensitive, as defined in Articles 10 and 11 of Directive 2016/680.
    3. Technical training of judges and lawyers: before any artificial intelligence mechanism is used in hearings and the decision-making process, it is a reasonable prerequisite that the professionals who would use it on a daily basis are familiar with the technology. Unfortunately, most judges and lawyers have poor technical knowledge and lack programming skills or even basic knowledge of the capabilities and functioning of the various artificial intelligence mechanisms. In light of the above, basic training of law practitioners is necessary, starting from undergraduate studies, together with retraining of judges during their judicial education.
    4. Clear rules concerning ownership of the data used by artificial intelligence mechanisms: under no circumstances should the companies that created an artificial intelligence mechanism have access to the personal data of defendants and convicted persons, nor use such data either commercially or for research purposes. Justice cannot be a profit-making sector.
    5. A comprehensible and explainable mode of operation of the artificial intelligence mechanism: the area of justice is interwoven with the principles of transparency and impartiality. Therefore, if a judge bases a decision, even partly, on the prediction of an artificial intelligence mechanism, it should be possible to explain why the mechanism arrived at that prediction. If no such explanation is possible, the decision based on it does not comply with the principle of transparency and cannot be considered impartial. It should be underlined here that popular but complex mechanisms, such as neural networks, make it particularly difficult to meet this requirement.
    6. Verification of the effectiveness of artificial intelligence mechanisms by independent authorities: a scheduled, regular assessment of the validity of the predictions made by artificial intelligence mechanisms should be conducted. An independent supervisory authority with sufficient financial resources and highly knowledgeable, experienced personnel is the ideal body for this task. Its assessments should be based both on quantitative data, such as statistics on the accuracy of the mechanisms' predictions, and on qualitative factors, such as the conclusions reached on a case-by-case basis.
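To make point 2 above concrete, the toy sketch below shows how a "risk score" that merely reproduces base rates learned from prejudiced historical records assigns different scores to otherwise identical individuals. It is a minimal illustration only: the groups, labels and figures are entirely fabricated, and real risk-assessment tools use far more elaborate (but equally data-dependent) models.

```python
from collections import Counter, defaultdict

# Hypothetical, fabricated training records: (group, recorded_reoffence)
# pairs. Group "B" was historically over-policed, so its *recorded*
# reoffence rate is inflated relative to group "A".
history = [("A", 0)] * 80 + [("A", 1)] * 20 + [("B", 0)] * 40 + [("B", 1)] * 60

def train(records):
    """Learn, per group, the base rate of the positive (reoffence) label."""
    counts = defaultdict(Counter)
    for group, label in records:
        counts[group][label] += 1
    return {g: c[1] / sum(c.values()) for g, c in counts.items()}

def predict_risk(model, group):
    """The 'risk score' is nothing more than the learned base rate."""
    return model[group]

model = train(history)
# Two otherwise identical individuals receive different scores purely
# because of the group label attached to them in the historical data.
print(predict_risk(model, "A"))  # 0.2
print(predict_risk(model, "B"))  # 0.6
```

The prejudice in the training data passes straight through to the predictions: no amount of computation corrects a biased record of the past.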

Undoubtedly, the implementation of artificial intelligence mechanisms in any sector of modern life is a complex issue requiring a multidisciplinary approach. In any event, it is not merely a legal issue; it also has pronounced ethical and social aspects that demand serious contemplation. Rapid technological development is of the utmost importance for improving our quality of life, and we should certainly integrate it into our society. However, its inclusion should be preceded by thorough preparation and planning. Only then will we enjoy the benefits of new technologies while severely limiting the challenges and dangers to the protection of human rights.


The right of access to written exam answers

By Aikaterini Psihogiou

In a preliminary ruling, the Court of Justice of the European Union answered the question whether a candidate's written answers in an examination, and the examiner's corrections of them, constitute personal data, as well as whether the candidate has rights of access and rectification with respect to his or her script after the examination has been completed.

The facts of the case

The request for a preliminary ruling was submitted in the context of legal proceedings between Peter Nowak and the Data Protection Commissioner of Ireland concerning the Commissioner's refusal to grant Mr Nowak access to his corrected script from an examination in which he had participated, on the ground that the information it contained was not personal data. Having doubts as to whether a written examination script is personal data, the Supreme Court of Ireland referred to the CJEU a request for a preliminary ruling on the interpretation of Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data.

Court’s response

To begin with, Directive 95/46/EC defines personal data as "any information relating to an identified or identifiable natural person". An identifiable person is one who "can be identified, directly or indirectly, in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity."

According to the Court, the use of the expression "any information" reveals the legislator's objective to give this term a broad definition, covering any information, whether objective or subjective, in the form of an opinion or assessment, provided that it relates to the person concerned; in other words, the information is connected to a specific person by reason of its content, purpose or effect.

In the Court's reasoning in this case, the content of a candidate's written answers in an examination reflects the candidate's level of knowledge and skills in a given field, as well as his or her way of thinking, reasoning and critical judgment.

Furthermore, in the case of handwritten examinations, the answers provide information about the candidate's handwriting.

Moreover, the purpose of collecting these answers is to assess the candidate's professional skills and his or her ability to exercise a specific profession.

Lastly, the use of this information is liable to have an impact on the candidate's rights and interests, as it can determine or affect, for example, his or her chances of entering the desired profession or position.

As regards the examiner's corrections of the candidate's answers, the Court found that they too constitute information concerning the candidate, since their content reflects the examiner's assessment of the examinee's capabilities. These corrections are also capable of producing consequences for the candidate.

The Court therefore decided that, in circumstances such as those of this case, a candidate's written answers in an examination and any related corrections by the examiner constitute personal data of the candidate. Accordingly, the candidate has, in principle, rights of access and rectification (Article 12 of the Directive and Articles 15 and 16 of Regulation 2016/679) with respect to both the written answers and the examiner's corrections.

The Court clarified, however, that the right of rectification does not allow the candidate to "correct", a posteriori, any "wrong" answers: potential mistakes do not constitute an inaccuracy to be corrected, but rather evidence of the candidate's level of knowledge. Finally, the rights of access and rectification do not extend to the examination questions, which do not constitute the candidate's personal data.

In practice, how can a candidate in Greece exercise the right of access to his or her answers and the related examiner's corrections?

In practice, a candidate can request, in writing or orally, that the examining authority grant access to his or her answers and the examiner's corrections. The right of access is, in principle, exercised free of charge; the candidate may be asked to pay a reasonable fee only if the request is manifestly unfounded or excessive (e.g. repetitive), or if the number of copies requested from the examining authority is large. The examining authority has no more than one month from the submission of the request to satisfy it, and exceptionally this time limit may be extended by up to a further two months.
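As an illustration of the time limits described above, here is a minimal sketch in Python of the deadline arithmetic. The submission date is hypothetical, and the sketch assumes calendar-month counting that clamps the day when the target month is shorter (e.g. 31 January plus one month falls on the last day of February):

```python
import calendar
from datetime import date

def add_months(d, months):
    """Shift a date by whole calendar months, clamping the day of the
    month to the last day of the target month when necessary."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

submitted = date(2019, 1, 31)            # hypothetical submission date
deadline = add_months(submitted, 1)      # ordinary one-month time limit
extended = add_months(submitted, 1 + 2)  # if extended by a further two months
print(deadline)  # 2019-02-28
print(extended)  # 2019-04-30
```

This is only an aid to reading the rule; the authority's obligation runs from the actual submission of the request, however it is counted by the authority concerned.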

*Aikaterini Psihogiou, LL.M., CIPP/E, is a lawyer in Athens, a graduate of the Law School of Athens and holder of a Master's degree (Cum Laude) in Law and Technology from Tilburg University. She works as a consultant on personal data protection.

Source:http://curia.europa.eu/juris/document/document.jsf?text=&docid=198059&pageIndex=0&doclang=el&mode=lst&dir=&occ=first&part=1&cid=455309

Homo Digitalis supports the important work carried out by the Data Protection Authority. We are optimistic that, through our actions and continued cooperation, we will contribute to its mission.


Nikos Theodorakis's interview with Homo Digitalis

Homo Digitalis has the honor of hosting an interview with Nikos Theodorakis, a Greek who excels at both the academic and the professional level in Europe and America. Mr. Theodorakis is an associate professor at the University of Oxford and an associate at Stanford University, while also practicing law at an American law firm; he has served as a consultant to international organizations.

He is thus ideally placed to discuss the importance of personal data in our business activities, commerce and everyday life, and we thank him warmly for the interview he gave us.

– You started your academic career in Trade Law, but for some years now you have turned to Personal Data and Privacy Protection Law. Is there any relation between trade and personal data?

Undoubtedly! Personal data, frequently called the "oil" of the 21st century, is an integral part of every commercial activity. Whether at the very centre of online services or supporting the contractual supply of goods, personal data is the driving force behind commerce. In the past, trade was a relatively decentralised process; nowadays, however, data is used in every commercial activity. To me, therefore, the transition from trade law to personal data protection law, in fact the combination of the two, was a rational and probably necessary choice, given the ever-increasing importance of data.

– You are working both as a professor and researcher in some of the most important universities internationally and as a lawyer in a large law firm. How is it possible to combine an academic career with practising law?

I have to admit that it is a very demanding combination, among other things because it entails frequent travel between Oxford, Brussels, Athens and New York for academic and professional commitments. However, this "balancing act" really satisfies me, as each sector offers something different: academia is a forum for the exchange of ideas, where you constantly learn, horizontally from your colleagues and vertically from your students, while dealing with legal issues that need further examination and resolution by the scientific community. Practicing law is more intense, active and pressing, as you are asked to solve your client's problem as soon as possible and in a practical way, while the legal strategy you devise is dynamic. The combination of these two contrasting occupations makes me evolve, so for now the fatigue is certainly worth it!

– You are cooperating with very important universities in the US. What is their stance towards GDPR? Should we feel lucky that it exists in Europe or does it simply cause more problems?

The truth is that the Regulation has been discussed at length in the academic and legal communities in the US, due to the large number of American firms that operate in Europe through their physical or web presence, and due to the Regulation's extraterritorial application under certain conditions. I can say that the dialogue of recent years on the other side of the Atlantic has been really productive; indeed, in recent discussions with my colleagues at the law schools of Stanford and Columbia, I have observed increasing interest in and knowledge of the subject. In fact, the Regulation has prompted an intensive debate about similar initiatives of a federal nature in the USA, the first signs of which have already appeared in California's new consumer protection legislation and the CLOUD Act.

– What do you think is the level of compliance with the GDPR for Greek companies and organisations? Where do we stand comparatively with other countries?

This is a complicated question, because we have to distinguish between companies that have fully complied and those that have taken only the basic measures required, probably superficially, leading to a "compliance theatre". One of the Regulation's negative side effects is that the market has been flooded with professionals who were not experts and who promised they could help a company comply fully with the Regulation at very low cost. In reality, compliance is a process that takes time, a total structural adjustment of how data is used, and the creation of a substantive data protection mindset. I would say that a minority of companies has substantially complied, a large majority has complied superficially, which leaves room for risks, and a sizeable percentage has not complied at all yet, which is very dangerous.

– What is the role of citizens in achieving companies' and organisations' compliance with the GDPR?

The role of civil society is to be aware of and take an interest in its rights, such as the right to be forgotten and the right to data portability, and to exercise them in good faith whenever there is any doubt or question about how companies process personal data. Citizens are the best guardians of this new legislation, and they should use their strength to push for improvement and transparency in the use of data. They can also organise in a coordinated manner, through an organisation like Homo Digitalis, and report possible infringements to the competent body in our country, the Greek Data Protection Authority.

– Can the rights conferred by the GDPR help Greek citizens in practice?

Certainly, as citizens can exercise a series of rights that give them greater and more substantial control over their data. Increased user control over personal data was one of the main motivations behind the Regulation, given that companies collect and process a wealth of data about us from various sources; accordingly, the user must be able to control who is processing his or her data, why, and where the data is being transferred. Overall, the rights the Regulation offers result in greater transparency and accountability in the use of Greek, and European, citizens' data.

– Both as a professor and researcher, and as a lawyer, you come up against new challenges. Which of them do you think we will face in Greece, and what action would you like an organisation like Homo Digitalis to take in response?

I believe that in the foreseeable future we will face challenges such as data leaks and breaches of network confidentiality, large-scale hacking combined with extortionate ransom demands in cryptocurrency, companies' inability to cope efficiently with users' requests to exercise their rights, spot checks by enforcement authorities, and complexity in how blockchain and artificial intelligence interact, or conflict, with the Regulation. An organisation like Homo Digitalis can adopt working documents and organise workshops to examine these challenges.

– How do you see the relationship between technology and humans evolving in the future?

It is a fact that the relationship between technology and humans will continue to grow increasingly complex through the evolution of artificial intelligence and the Internet of Things. Developments that a few decades ago were figments of science fiction are now much closer than we may think.


Can a ban on the use of hyperlinks leading to defamatory content violate freedom of expression?

By Lefteris Chelioudakis

On 4 December 2018, the European Court of Human Rights (ECtHR) held unanimously in its judgment in Magyar Jeti Zrt v. Hungary that imposing liability for the use of hyperlinks leading to defamatory content may violate the right to freedom of expression.

According to the facts of the case, the applicant company (444.hu), which operates a news website, had been found liable by the national courts of Hungary for disseminating defamatory material, because it had published an article containing a hyperlink to an interview on YouTube that was later found to be defamatory.

Specifically, a bus carrying a group of football hooligans on their way to a match had parked in front of a school attended mostly by Roma students. The hooligans began shouting racist chants at the students and throwing beer bottles, and one of them urinated in front of the school. The children's teacher called the police, and the hooligans left only when the police arrived on the scene.

On the same day, the head of the local Roma community gave an interview about the incident, stating that the hooligans were members of the far-right Hungarian political party "Jobbik".

The interview was made available on YouTube. The following day, 444.hu published an article about the incident on its website, including a hyperlink to the interview.

In its judgment, the ECtHR underlined the importance of hyperlinks for the proper functioning of the Internet and distinguished hyperlinks from traditional publications: the former merely direct users to material that is already available elsewhere, while the latter provide the material itself.

The Court also found that the Hungarian law imposing strict liability for the dissemination of defamatory material had excluded any meaningful assessment of the applicant company's right to freedom of expression. The national courts should have examined the case more closely, as the relevant provisions could undermine the flow of information on the internet by deterring authors and publishers from using hyperlinks.

Moreover, the ECtHR recalled that, for journalists, the protection of the right to freedom of expression under Article 10 depends on the principle of good faith and on factual accuracy, so that "reliable and precise" information is provided in accordance with journalistic ethics. Consequently, the protection afforded by this right does not cover the dissemination of false news.

Lastly, the ECtHR stressed that when third-party rights are at stake, a fair balance must be struck between freedom of expression, as protected by Article 10, and the right to private life, as protected by Article 8 of the European Convention on Human Rights.

More information on the case can be found on the website The IPKat.


Protection of Personal Data and Sexual Exploitation

By Anastasia Karagianni

The murder of 21-year-old Eleni in Rhodes forced Greek society to finally confront the true extent that rape can reach. Rape is not merely sexual contact without the other person’s consent. It is coercion into sexual intercourse through physical violence or a serious and imminent threat, used to commit that non-consensual act. Rape can result in death, as we have seen.

Nevertheless, sexual harassment also exists in the digital world. How? Increasingly, it takes place through breaches of personal data. Specifically, studies show that 4% of adolescents aged 12-17 admit that they have sent sexual messages depicting themselves naked or half-naked to other users, and 15% of adolescents confess that they have received such material. This is called “sexting”: the exchange of photographs and messages with mainly sexual content through applications installed on smartphones or other electronic devices. Sometimes, however, these messages are circulated without the consent of the person depicted. In that case, the depicted person’s right to privacy is violated.

What are these personal data? Personal data are information relating to a person, and may be “sensitive” or “non-sensitive”. Information becomes personal when it can be connected, directly or indirectly, with a specific person. It may therefore consist of different pieces of information which, gathered together, can lead to the identification of an individual. Such information characterizes a person’s biological, physical and mental existence, as well as their social, political, financial and cultural existence. Accordingly, because of its sexual content, a naked or half-naked picture of a person is considered personal data, as it concerns the user’s sex life.

But how can the infringement of personal data lead to sexual exploitation? Once a picture appears on the internet, it is difficult to control its circulation. In most cases, these photographs are sent within the framework of a confidential relationship between the sender and the recipient. Up to that point, no problem arises. Problems arise when this relationship breaks down or is based on false pretences. The further circulation of this material to other users without the consent of the person depicted, and often without their knowledge, constitutes an infringement of their privacy and a violation of their sexual integrity when it takes place for lecherous purposes or for trafficking in pornographic material.

About two years ago, 22-year-old Lina committed suicide by falling from the ninth floor of her student residence in Thessaloniki. A prosecutor’s investigation was ordered to search online for traces of possible criminal conduct, including the unlawful monitoring, retention and processing of personal data, internet threats intended to compel an act or its tolerance, and felony criminal association, as it appears that the girl regularly received threats that her personal pictures would be published on the Internet.

The European Union’s new General Data Protection Regulation safeguards the right to be forgotten, the right to information and access to data, the right to rectification, and the right to object to the processing of one’s data.

Greek legislation and case-law provide a high level of protection. It only remains for us to make use of it.

*If you face problems on the Internet and you are under 18 years old, call 2106007686, the helpline of the National Centre for Safe Internet.


Actions at national and European level regarding e-evidence

Today, Wednesday 5 December 2018, in view of the upcoming meeting of the Justice and Home Affairs Council of the Council of the European Union (6-7 December), 18 organizations sent a letter to all EU Member States, voicing their serious concerns about the approach suggested by the Austrian Presidency in the draft Regulation on European production and preservation orders for electronic evidence in criminal matters (“e-evidence”).

Among these organizations are EDRi, Electronic Frontier Foundation, the Council of Bars and Law Societies of Europe – CCBE, Access Now, Privacy International and many national digital rights organizations, including Homo Digitalis.

We believe that the solution proposed by the Austrian Presidency does not adequately address important issues arising from the legislation in question. For example, the text:

– greatly reduces the possibility for enforcing authorities to refuse recognition and enforcement of an order on the basis of a violation of the Charter of Fundamental Rights;

– wrongly assumes non-content data is less sensitive than content data, contrary to case law of the Court of Justice of the European Union (CJEU) and the European Court of Human Rights (ECtHR) – notably the CJEU Tele 2 judgment (cf. para.99) and the ECtHR’s case Big Brother Watch and others v. UK (cf. para.355-356);

– contemplates the possibility to issue orders without court validation, disregarding what the CJEU has consistently ruled, including in its Tele 2 judgment (para. 120);

– does not provide legal certainty; and

– undermines the role of executing states, thereby undermining judicial cooperation.

Similar views have been expressed by the European Data Protection Board (EDPB), judges’ associations such as the German Association of Judges, companies including internet service providers, academics, bar associations, and the Meijers Committee, among many others.

At the national level, Homo Digitalis today submitted a letter to the Greek Ministry of Justice (Protocol no. 4568/5.12.2018), expressing its concerns about these provisions.

You can find a copy of our letter in Greek here.

You can learn more on the action in the European level here.


8 digital rights organizations ask for transparency regarding the new Data Protection Commissioner of Serbia

Today, 4 December, EDRi, Access Now, APTI, EFN, Epicenter.works, Open Rights Group, Privacy International and Homo Digitalis sent a joint letter to the National Assembly of the Republic of Serbia, requesting a transparent procedure regarding the appointment of the new Data Protection Commissioner of the country.

This is the second action in the Balkans in which Homo Digitalis takes part, aiming at the provision of adequate safeguards for human rights in the contemporary digital age.

The letter is available here.


The Norwegian Consumer Council files a complaint against Google

On November 27, 2018 the Norwegian Consumer Council filed a complaint against Google. Based on a new study, Google is accused of using deceptive design and misleading information to manipulate its users.

More particularly, Google is accused of tracking users through “Location History” and “Web & App Activity”, which are settings integrated into all Google accounts.

For users of Android devices, such as Samsung and Huawei smartphones, it is extremely difficult to avoid this tracking.

According to the complaint, some of the techniques used by Google to push the users to share their location are:

Deceptive click-flow: The click-flow when setting up an Android device pushes users into enabling “Location History” without being aware of it. This contradicts legal obligations to ask for informed and freely given consent.

Hidden default settings: When setting up a Google account, the “Web & App Activity” settings are hidden behind extra clicks and enabled by default.

Repeated nudging: Users are repeatedly asked to turn on “Location History” when using different Google services even if they decided against this feature when setting up their phone.

Google’s intention is to elicit users’ consent so that they agree to being constantly tracked, thereby revealing very important aspects of their personalities! What are these aspects?

What does Google know exactly? Does Google know, for example, if you are in your living room, your bedroom or even your toilet? How many times per minute does it track you? When you take a cigarette break at work is Google there with you? Does Google know when you are on a date? Does it know your religious beliefs? Your health history? Learn more about all these in the official video by the Norwegian Consumer Council. More information can be found here.