ICO Investigation Into Police Use of Facial Recognition Technology

Information Commissioner Elizabeth Denham is reported to have launched a formal investigation into how police forces use facial recognition technology (FRT), following high failure rates, misidentifications and worries about legality, bias and privacy.

Concerns Expressed In Blog Post In May

In a blog post on the ICO website back in May, Elizabeth Denham expressed several concerns about how FRT was being operated and managed. While acknowledging that there may be significant public safety benefits from using FRT, she highlighted concerns about:

  • A possible lack of transparency in the police’s use of FRT, and the real risk that the public safety benefits derived from FRT will not be realised if public trust is not addressed.
  • The absence of national-level co-ordination in assessing the privacy risks, and of a comprehensive governance framework to oversee FRT deployment.  This has since been addressed to an extent by an oversight panel, and by the appointment of a National Police Chiefs’ Council (NPCC) lead for the governance of the use of FRT in public spaces.
  • The use and retention of images captured using FRT.
  • The need for clear evidence to demonstrate that the use of FRT in public spaces is effective in resolving the problem that it aims to address, and that it is no more intrusive than other methods.

Commissioner Denham said that legal action would be taken if the Home Office did not address her concerns.

Notting Hill Carnival & Football Events in South Wales

Back in May 2017, the South Wales and Gwent Police forces announced that they would be running a trial of ‘real-time’ facial recognition technology on Champions League final day in Cardiff. In June, the trial of FRT at the final was criticised for costing £177,000 and yet resulting in only one arrest, of a local man, on a matter unconnected with the final.

Also, after trials of FRT at the 2016 and 2017 Notting Hill Carnivals, police faced criticism that the technology was ineffective, racially discriminatory, and confused men with women.

Research

Recent research by Cardiff University, which examined the use of the technology at a number of sporting and entertainment events in Cardiff over more than a year, including the UEFA Champions League Final and the Autumn Rugby Internationals, found that for 68% of submissions made by police officers in the ‘Identify’ mode, the image quality was too low for the system to work. The research also found that the ‘Locate’ mode of the FRT system failed to correctly identify a person of interest 76% of the time.

What Does This Mean For Your Business?

Businesses use CCTV for monitoring and security purposes, and most businesses are aware of the privacy and legal compliance aspects (GDPR) of using such a system, and of how and where the images are managed and stored.

As a society, we are also used to being under surveillance by CCTV systems, which can have real value in helping to deter criminal activity, locate and catch perpetrators, and provide evidence for arrests and trials. It is also relatively common for CCTV systems to fail to provide good-quality images and/or to be ineffective at clearly identifying persons and events.

With the much more advanced facial recognition technology used by police, e.g. at public events, there does appear to be some evidence that it has not yet achieved the hoped-for effectiveness, that it may not have justified the costs, and that concerns about public privacy may be valid, to the point that the ICO deems it necessary to launch a formal and ongoing investigation.

Liberty Wins Right To Judicial Review Into Investigatory Powers Act

Human rights group Liberty has won the right to a judicial review of the Investigatory Powers Act 2016, which could mean a legal challenge in the High Court as soon as next year.

The Investigatory Powers Act

The Investigatory Powers Act 2016 (also known as the ‘Snooper’s Charter’) became law in the UK in November 2016. It was designed to extend the reach of state surveillance, and it legally requires web and phone companies to store everyone’s web browsing histories for 12 months and to give the police, security services and official agencies unprecedented access to that data. The Act also means that security services, government agencies and police can hack into computers and phones and collect communications data in bulk, and that judges can sign off police requests to view journalists’ call and web records.

Long Time Coming

Liberty was given the general go-ahead by the UK High Court to make a legal challenge against the Investigatory Powers Act in July 2017, and was able to do so with the help of £50,000 of crowdfunding raised via CrowdJustice.

Liberty’s challenge is also thought to have been helped by the European Court of Justice ruling (in a separate case argued by Liberty lawyers back in 2016) that the same powers in the UK’s previous state surveillance law, the ‘Data Retention and Investigatory Powers Act’ (DRIPA), were unlawful, and by a Court of Appeal ruling in January 2018 that found the same thing.

The UK government was, therefore, given until July 2018 to amend or re-write powers to require phone and internet companies to retain data on the UK population.

Part 4 of the Act

The most recent High Court ruling on 29th November gives Liberty the right to a judicial review on part 4 of the Investigatory Powers Act.  This is the part which gives many government agencies powers to collect electronic communications and records of internet use, in bulk, without reason for suspicion.

Concerns About GCHQ’s Hacking

Human rights groups and even Parliament’s Intelligence and Security Committee have become particularly concerned about an apparent shift by intelligence services such as GCHQ towards hacking computer systems, networks and mobile phones for information gathering, under ‘Computer Network Exploitation’ (CNE) programmes.

What Does This Mean For Your Business?

The UK’s ability to spot and foil potential plots is vital, and the Investigatory Powers Act may include measures that could help with that. Even so, many people and businesses (communications companies, social media and web companies) are still uneasy about the extent of the legislation: what it forces companies to do, how necessary it is, and what effect it will have on businesses publicly known to be snooping on their customers on behalf of the state. The 200,000+ signatures on a petition calling for the repeal of the Investigatory Powers Act after it became law, and the £50,000 of crowdfunding raised from the public in less than a week to challenge parts of the Act in the courts, both emphasise that UK citizens value their privacy and take the issues of privacy and data security very seriously.

Liberty is essentially arguing for what it sees as a more proportionate surveillance regime that can better balance public safety with respect for privacy. The government initially believed that this level of surveillance was necessary to counter terrorist groups and the threats posed to safety and democracy by other states, but successive legal challenges by Liberty have seen it give some ground. According to the Intelligence and Security Committee, GCHQ is running a project that aims to improve the way it complies with the Act, and MI5 has also said that it is trying to operate more compliantly.  As for additional oversight of government orders to internet and phone companies, this is estimated to be running about a year behind schedule, with IT problems being blamed for the delay.

Data Protection Trust Levels Still Low After GDPR

A report by the Chartered Institute of Marketing (CIM) has shown that 42% of consumers have received communications from businesses they had not given permission to contact them since GDPR came into force, which could be a key reason why consumer trust in businesses remains low.

Not Much Difference

The CIM report shows that only 24% of respondents believe that businesses treat people’s personal data in an honest and transparent way.  This is only slightly higher than the 18% who believed the same thing when GDPR took effect 6 months ago.

Young More Trusting

The report appears to indicate that although trust levels are generally low, younger people trust businesses more with their data.  For example, the report shows that 33% of 18-24 year olds and 34% of 24-35 year olds trust businesses with their data, compared with only 17% of over-55s.

More Empowered But Lacking Knowledge About Rights

Consumers appear to feel more empowered by GDPR to act if they feel that organisations are not serving them with the right communications.  For example, the report showed that rather than just continuing to receive and ignore communications from a company, 50% of those surveyed said that GDPR has motivated them consciously not to opt in to begin with or, if already opted in, has made them more likely to unsubscribe.

This feeling of empowerment was also illustrated back in August in a report based on a study by business intelligence and data management firm SAS.  The SAS study showed that more than half of UK consumers (55%) looked likely to exercise their new GDPR rights within the first year of GDPR’s introduction.

Unfortunately, even though many people feel more empowered by GDPR, there still appears to be a lack of knowledge about exactly what rights GDPR has bestowed upon us. For example, the report shows that only 47% of respondents said they know their rights as a consumer in relation to data protection.  This figure has risen by only four percentage points (from 43%) since the run-up to GDPR.

What Does This Mean For Your Business?

The need to comply with the law and avoid stiff penalties, plus the opportunity to put the data house in order, meant that the vast majority of UK companies took their GDPR responsibilities seriously, and they are likely to be well versed in the rights and responsibilities around it (and to have an in-house ‘expert’). Unfortunately, there are always a few companies / organisations that ignore the law and continue contacting people.  The ICO has made examples of some: back in October, Manchester-based Oaklands Assist UK Ltd was fined £150,000 by the ICO for making approximately 64,000 nuisance direct marketing calls to people who had already opted out of automated marketing.  This is one example of a company being held accountable, but it is clear from the CIM’s research that many consumers still don’t trust businesses with their data, particularly when they hear about data breaches / data sharing in the news (e.g. Facebook), or continue to have their own experiences of unsolicited communications.

It may be, as identified by the CIM, that even though GDPR has empowered consumers to ask the right questions about their data use, marketers now need to answer these, and to prove to consumers how data collection can actually benefit them e.g. in helping to deliver relevant and personalised information.

The apparent lack of a major impact of GDPR on public trust could also indicate the need for an ongoing campaign to drive more awareness and understanding across all UK businesses.

£385,000 Data Protection Fine For Uber

Ride-hailing (and now bike and scooter-hiring) service Uber has been handed a £385,000 fine by the ICO for data protection failings during a cyber-attack back in 2016.

What Happened?

The original incident took place in October and November 2016, when hackers accessed a private GitHub coding site used by Uber software engineers. Using login details obtained via GitHub, the attackers were able to get into the Amazon Web Services account that handled the company’s computing tasks and access an archive of rider and driver information. The result was the compromise (and theft) of data relating to 600,000 US drivers and 57 million user accounts.
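
The initial foothold is a reminder that cloud credentials should never be committed to source code, even in a private repository. Below is a minimal sketch, in Python, of the standard mitigation of reading them from the environment (or a secrets manager) at runtime; the function name is hypothetical, though the environment variable names are the standard ones the AWS SDKs read:

    import os

    def load_aws_credentials():
        # Read credentials from the runtime environment rather than from
        # anything hard-coded in (or importable from) the repository.
        key = os.environ.get("AWS_ACCESS_KEY_ID")
        secret = os.environ.get("AWS_SECRET_ACCESS_KEY")
        if not key or not secret:
            raise RuntimeError("AWS credentials not configured in the environment")
        return key, secret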

The ICO’s investigation focused on avoidable data security flaws, during the same hack, that led to the theft (via ‘credential stuffing’, the injection of compromised username and password pairs into other accounts and systems) of personal data, including full names, email addresses and phone numbers, of 2.7 million UK customers from the cloud-based storage system operated by Uber’s US parent company.

The ICO’s fine to Uber also relates to the records of nearly 82,000 UK-based drivers, including details of journeys made and how much the drivers were paid.

Attackers Paid To Keep Breach Quiet

Another key failing was that not only did Uber fail to inform affected drivers about the incident for more than a year, but it also chose to pay the attackers $100,000 through its bug bounty programme (a scheme through which websites and software developers offer recognition and payment to those who report software bugs) to delete the stolen data and keep quiet about the breach.

Before GDPR

Even though GDPR, which came into force on 25th May this year, gives the ICO the power to impose a fine on a data controller of up to £17m or 4% of global turnover, the Uber breach took place before GDPR.  This means that the ICO issued the £385,000 fine under the Data Protection Act 1998, which was in force at the time.

Other Payments and Fines

Uber also agreed to pay a $148m settlement in a case in the US brought by 50 US states and the District of Columbia over the company’s attempt to cover up the 2016 data breach.

Also, for the same incident, Uber is facing a £533,000 fine from the data protection authority for the Netherlands, the Autoriteit Persoonsgegevens.

What Does This Mean For Your Business?

As noted by the ICO director of investigations, Steve Eckersley, as well as the data security failure, Uber’s behaviour in this case showed a total disregard for the customers and drivers whose personal information was stolen, as no steps were taken to inform anyone affected by the breach, or to offer help and support.

Sadly, Uber joins a line of well-known businesses that have made the news for all the wrong reasons where data handling is concerned, e.g. Yahoo’s 2014 data breach of 500 million users’ accounts, followed by the discovery that a 2013 incident had been the biggest data breach in history up to that point. Similar to the Uber episode is the Equifax hack, in which 143 million customer details were stolen (possibly 44 million of them from UK customers) while the company waited 40 days before informing the public, and three senior executives sold their shares, worth almost £1.4m, before the breach was publicly announced.

This story should remind businesses how important it is to invest in keeping security systems up to date and to maintain cyber resilience at all levels. This could involve keeping up to date with patching (9 out of 10 hacked businesses were compromised via unpatched vulnerabilities), and should extend to training employees in cyber-security practices and adopting multi-layered defences that go beyond the traditional anti-virus and firewall perimeter.

Companies need to conduct security audits to make sure that no old, isolated data is left stored on legacy systems or platforms, where it would offer easy access to cyber-criminals. Companies may also need tools that allow security devices to collect and share data and co-ordinate a unified response across the entire distributed network.

Even though the recent CIM study showed that less than one-quarter of consumers trust businesses with their data security, at least the ICO is currently sending some powerful messages to (mainly large) businesses about the consequences of not fulfilling their data protection responsibilities.  For example, as well as the big fine for Uber, back in October the ICO fined a Manchester-based company £150,000 for making approximately 64,000 nuisance direct marketing calls to people who had opted out via the TPS, and earlier this month a former employee of a vehicle accident repair centre, who stole customer data and passed it to a company that made nuisance phone calls, was jailed for 6 months following an ICO investigation.

Facial Recognition For Border Control

It has been reported that the UK Home Office will soon be using biometric facial recognition technology in a smartphone app to match a user’s selfie against the image read from a user’s passport chip as a means of self-service identity verification for UK border control.

Dutch & UK Technology

The self-service identity verification ‘enrolment service’ system uses biometric facial recognition technology that was developed in partnership with WorldReach Software, an immigration and border management company, with support from Dutch contactless document firm ReadID.

Flashmark By iProov

Flashmark technology, which will be used to provide the biometric matching of a user’s selfie against the image read from the user’s passport chip, was developed by London-based firm iProov.  The idea behind it is to be able to prove that the person presenting themselves at the border for verification is genuinely the owner of the ID credential, and not a photo, screen image, recording or doctored video.

Flashmark works by using a sequence of colours to illuminate a person’s face; the reflected light is then analysed to determine whether a real, live face matches the image being presented.
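
To make the idea concrete, here is a minimal illustrative sketch (in Python) of this kind of challenge-response liveness check. The names, the match threshold, and the simplification of ‘reflected light’ down to one dominant colour per camera frame are all assumptions for illustration, not iProov’s actual design:

    import random

    COLOURS = ["red", "green", "blue"]

    def make_challenge(length=8):
        # The verifier picks an unpredictable colour sequence for this attempt only.
        return [random.choice(COLOURS) for _ in range(length)]

    def verify_liveness(challenge, observed, threshold=0.9):
        # 'observed' is the dominant colour of light measured on the face in each
        # frame (extracting it from raw camera frames is omitted here). A photo
        # or pre-recorded video cannot reflect a sequence chosen after it was made.
        if len(observed) != len(challenge):
            return False
        matches = sum(c == o for c, o in zip(challenge, observed))
        return matches / len(challenge) >= threshold

Because the colour sequence is generated freshly for each attempt, a static image or replayed recording has no way of producing matching reflections.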

iProov is a big name in the biometric border-control technology world, having won the 2017 National Cyber Security Centre’s Cyber Den competition at CyberUK, and having won a contract from the US Department of Homeland Security (DHS) Science and Technology Directorate’s Silicon Valley Innovation Program.  In fact, iProov was the first British (and first non-US) company to be awarded a contract by the DHS to enable travellers to use self-service document checks at border crossing points.

Smartphone App

The new smartphone-based digital identity verification app from iProov has been developed to help support applications for the EU Settlement Scheme.  This is the mechanism for resident EU citizens, their family members, and the family members of certain British citizens, to apply on a voluntary basis for the UK immigration status which they will need to remain in the UK beyond the end of the planned post-exit implementation period on 31 December 2020.

It is believed that the smartphone app will help the UK Home Office to deliver secure, easy-to-use interactions with individuals.

What Does This Mean For Your Business?

Accurate, secure and automated biometric / facial recognition and identity verification systems have many business applications and are becoming more popular.  For example, iProov’s technology is already used by banks (ING in the Netherlands) and governments around the world, and banks such as Barclays already use voice authentication for telephone banking customers.

Biometrics are already used by the UK government.  For example, under the biometric residence permit (BRP) system, those planning to stay longer than 6 months, or to apply to settle in the UK, need a biometric permit. This permit includes details such as name, date and place of birth, a scan of the applicant’s fingerprints and a digital photo of the applicant’s face (this is the biometric information), immigration status and conditions, and information about access to public funds (benefits and health services).

Many people are already used to using a biometric element as security on their mobile device, e.g. facial recognition, a fingerprint, or even Samsung’s iris scanner on its Note ‘phablet’. Using a smartphone-based ID verification app for border purposes is therefore not such a huge step, and many of us are used to having our faces scanned and matched with our passports anyway as part of UK border control’s move towards automation.

Smartphone apps have obvious cost and time savings as well as convenience benefits, plus biometrics provide a reliable and more secure verification system for services than passwords or paper documents. There are, of course, matters of privacy and security to consider, and as well as an obvious ‘big brother’ element, it is right that people should be concerned about where, and how securely their biometric details are stored.

Jail For Car Accident Data Thief

An employee at a vehicle accident repair centre who stole the data of customers and passed it to a company that made nuisance phone calls has been jailed for 6 months following an investigation by the Information Commissioner’s Office (ICO).

Used Former Co-Worker’s Login To Company Computer

The employee of Nationwide Accident Repair Services, Mustafa Kasim, used a former co-worker’s login details to access software (Audatex) on the company computer system that was used to estimate repair costs.  The software also stored the personal data (names and phone numbers) of vehicle owners, and it was the personal data of thousands of customers that Mr Kasim took without the company’s permission and then passed on to a claims management company, which made unsolicited phone calls to those people.

ICO Contacted

Mr Kasim was unmasked as the data thief after Nationwide Accident Repair Services noticed that several clients had complained about being targeted by nuisance calls, which led to the decision to involve the ICO.

During the investigation, it was discovered that Mr Kasim continued to take and pass on customer data even after he started a new job at a different car repair organisation which used the same Audatex software system.

First With A Prison Sentence

What makes this case so unusual is that it is the first prosecution brought by the ICO under legislation which carries a potential prison sentence.

Computer Misuse Act

The ICO would normally prosecute this kind of case under the Data Protection Act 1998 or 2018, where the penalties are fines rather than prison sentences, but in Mr Kasim’s case it was judged that the nature and extent of the criminal behaviour required a wider range of penalties to be available to the court.  It was therefore decided to prosecute under s.1 of the Computer Misuse Act 1990, and it was the offences under this Act that resulted in Mr Kasim’s 6-month prison sentence.

What Does This Mean For Your Business?

Since preparing for GDPR, many companies have become much more conscious of the value of personal data, the importance of protecting customer data, and the possible penalties and consequences of failing to do so.  In this case, the ICO acknowledged that the reputational damage to companies whose data is stolen in this way (e.g. Nationwide Accident Repair Services and Audatex) can be immeasurable. The ICO also noted the anxiety and distress caused to the accident repair company’s customers who received nuisance calls.

This case was also a way for the ICO to send a powerful message that obtaining and disclosing personal data without permission is something that will be taken very seriously, and that the ICO will push boundaries and be seen to use any tool at its disposal to protect the data protection rights of individuals. The case also serves as a reminder to businesses that looking at ways to provide the maximum protection of customer data and plug any loopholes is a worthwhile ongoing process, and that threats can come from within as well as from cyber criminals on the outside.

EU’s Web Copyright Directive Could Spell Trouble

A vote in January on contentious new EU copyright laws could negatively impact tech platforms and all online publishers, create risky legal grey areas for many businesses, stifle freedom of expression, and lead to more surveillance and control.

What Copyright Law?

There will be a final vote in January 2019 on an EU Directive on Copyright in the Digital Single Market. According to EU leaders, the intention in creating the new Directive is to modernise copyright laws across all EU member states, and to take account of how people now share and distribute content in the digital age.

The change in the law essentially puts a greater obligation for policing copyright infringement on the companies distributing content and making works available online, e.g. the tech giants, rather than on the individuals downloading it.

Key Articles

The likely introduction of the new Directive has provoked arguments, many of which relate to two particular articles. These are Article 11, which states that news websites should be able to charge web firms for sharing links to their content, and Article 13, which puts the onus on websites to work with copyright holders in order to prevent users from uploading content that they don’t own the rights to.

In the case of Article 11, for example, some commentators have asked whether this will mean difficulties placing a value on articles and article extracts, whether headlines and snippets will require licensing, whether payment will be required for hyperlinking (a ‘link tax’), and whether platforms publishing news, e.g. Google, will have to pay compensation for providing headlines and extracts. These kinds of complications and costs could, therefore, discourage the distribution and sharing of news content, and could even discourage the sharing of things like holiday photos on the internet.

For Article 13, the requirement to install ‘content filters’ to help web firms stop users sharing copyrighted content, which appears to be targeted at platforms like Facebook, Twitter and YouTube, has led critics to say that lawmakers have not taken into account how people actually use the web. Article 13 could, therefore, mean that too much legitimate content is removed, negatively affecting the user experience and making it harder for web users to share news articles and to find good-quality journalism online.
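
To see why critics are doubtful, consider the crudest possible content filter, sketched below in Python. The exact-hash matching and the fingerprint store are deliberately naive assumptions for illustration, not how real systems such as YouTube’s Content ID work:

    import hashlib

    # Hypothetical store of fingerprints supplied by copyright holders.
    KNOWN_FINGERPRINTS = {hashlib.sha256(b"bytes of a copyrighted work").hexdigest()}

    def is_blocked(upload: bytes) -> bool:
        # Exact matching is trivially evaded by changing a single byte, while a
        # fuzzier perceptual matcher loose enough to catch re-encodes will also
        # catch quotation, parody and fair comment, which is the critics' core concern.
        return hashlib.sha256(upload).hexdigest() in KNOWN_FINGERPRINTS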

Other Concerns

Several other concerns have been raised about the contents of the new EU copyright Directive, including:

  • This being a potential step towards the transformation of the internet from an open sharing platform to a tool for the automated surveillance and control of users.
  • Content filters creating a kind of censorship, and doubts over whether automatic content filters will be able to detect things like fair comment, satire, criticism and parody.

What Does This Mean For Your Business?

Although the stated intention for the change in the law appears to be a good one, for the tech giants this law change represents greater responsibility and control being placed upon them, and the potential for increased costs and legal grey areas could create more risks for any company that simply wants to freely report or share news content. The new Directive has, therefore, been widely criticised and greeted with suspicion: back in June, 70 of the biggest names on the internet, including Tim Berners-Lee and Wikipedia founder Jimmy Wales, signed an open letter citing worries about the Directive simply being used as a way for governments to exert more control and extend surveillance. The use of automatic content filters, and the threat of charges for using headlines, snippets and even hyperlinks, certainly look as though they could limit free speech and deter the sharing of information and news in a way that could work against the interests of many businesses and organisations.

It remains to be seen how January’s vote goes and whether Brexit will mean that the UK will actually be subject to the directive.

New Political Ad Transparency Rules Tested With Pro-Brexit Website

No sooner had Facebook announced new rules to force political advertisers to prove their identities and their ad spend than an anonymous pro-Brexit campaign website with a massive £257,000 ad spend was discovered.

Mainstream Network

The anonymous website and campaign, identified only as ‘Mainstream Network’, was discovered by campaign group 89up. Clicking on a Mainstream Network Facebook advert takes users to a page about their local constituency and MP, and clicking from there generates an email to their MP requesting that the Prime Minister abandon her Chequers Brexit deal. It was also discovered that a copy of each of these emails is sent back to Mainstream Network.

11 Million People Reached

Campaign group 89up estimates that the unknown backers of Mainstream Network must have spent in the region of £257,000 to date on the Facebook adverts, which it estimates could have reached 11 million people.

What’s The Problem?

The problem with these political adverts is that Facebook has recently announced new rules in the UK requiring anyone wishing to place an advert relating to a live political issue (promoting a UK political candidate, or referencing political figures, political parties, elections, legislation before Parliament, or past referenda that are the subject of national debate) to prove their identity and prove that they are based in the UK. Policing this should involve obtaining proof of identity and location, e.g. by checking a passport, driving licence or residence permit. According to Facebook, any political advert must also carry a “Paid for by” disclaimer so that Facebook users can see who the advert is from, and the “Paid for by” link next to each advert should lead to a publicly searchable archive of political adverts showing a range for the ad’s budget, the number of people reached, the other ads that Page is running, and previous ads from the same source.
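
In data terms, these rules amount to attaching verified provenance metadata to every political advert. The sketch below (Python) shows roughly what one publicly searchable archive entry would expose, based only on the fields described above; the class and field names are hypothetical, not Facebook’s actual API:

    from dataclasses import dataclass

    @dataclass
    class PoliticalAdRecord:
        paid_for_by: str        # verified identity behind the "Paid for by" label
        advertiser_in_uk: bool  # identity/location checked, e.g. against a passport
        budget_range: str       # shown as a range, not an exact figure
        people_reached: int     # audience size for this advert
        page_name: str          # the Page running this advert and its other ads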

GDPR Breach Too?

It is also believed that sending a copy of the email back to Mainstream Network, in this case, could also constitute a breach of GDPR.

First Job For Facebook’s Nick Clegg

What to do about Mainstream Network and their campaign could end up being the first big task of Facebook’s newly appointed global communications chief and former deputy PM Sir Nick Clegg. It’s been reported that Mark Zuckerberg himself and Facebook’s chief operating officer Sheryl Sandberg were personally involved in recruiting Mr Clegg given the importance and nature of the role.

What Does This Mean For Your Business?

After Facebook announced new rules to ensure political ad transparency, the discovery of Mainstream Network’s anonymous adverts, and the scale of the ad spend and reach, must be at the very least embarrassing and awkward for Facebook, and is another piece of unwanted bad publicity for the social network. Whatever a campaign of this kind and scale is for, Facebook must be seen to act in order to retain the credibility of its claims that it wants political ad transparency, not to lose any more of the trust of its users and advertisers, and to avoid being linked with any more political influence scandals.

Facebook has recently faced many other high-profile problems, including questions over how much tax it pays, the scandal of sharing user details with Cambridge Analytica and AggregateIQ (over the UK referendum), a fine from the ICO for breaches of the UK’s Data Protection Act, and a major hack, and it is perhaps with all this in mind that it has hired a former politician and UK Deputy Prime Minister. Some political commentators have also noted that it may be very useful for Facebook to have a person on board who knows the key players, has reach, and is able to lobby on Facebook’s behalf in one of its toughest regulatory areas, the European Union.

New Tech Laws For AI Bots & Better Passwords

It may be no surprise to hear that California, home of Silicon Valley, has become the first state to pass laws to make AI bots ‘introduce themselves’ (i.e. identify themselves as bots), and to ban weak default passwords. Other states and countries (including the UK) may follow.

Bot Law

With more organisations turning to bots to help them create scalable, 24-hour customer services, and in the interests of transparency at a time when AI is moving forward at a frightening pace, California has just passed a law requiring bots to identify themselves as such on first contact. Also, in the light of recent US election interference, and taking account of the fact that AI bots can be made to do whatever they are instructed to do, it is thought that the law has been passed partly to prevent bots from being able to influence election votes or incentivise sales.
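
For a well-intentioned operator, complying is straightforward: the bot discloses its artificial identity the first time it speaks in any conversation. A minimal sketch in Python follows; the class, the wording of the disclosure and the conversation tracking are illustrative assumptions, not language from the statute:

    class DisclosedBot:
        def __init__(self, name="HelpBot"):
            self.name = name
            self._introduced = set()  # conversations already given the disclosure

        def reply(self, conversation_id, user_message):
            answer = self._answer(user_message)
            if conversation_id not in self._introduced:
                self._introduced.add(conversation_id)
                return f"Hi, I'm {self.name}, an automated assistant (a bot). {answer}"
            return answer

        def _answer(self, user_message):
            # Placeholder for the bot's real response logic.
            return "How can I help you today?"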

Duplex

The ability of Google’s Duplex technology to make the Google Assistant AI bot sound like a human, and potentially fool those it communicates with, is believed to have been one of the drivers for the new law. Google Duplex is an automated system that can make phone calls on your behalf with a natural-sounding human voice instead of a robotic one. Duplex can understand complex sentences, fast speech and long remarks, and is so authentic that Google has already said that, in the interests of transparency, it will build in a requirement to inform those receiving a call that it comes from Google Assistant / Google Duplex.

Amazon, IBM, Microsoft and Cisco are also all thought to be in the market for highly convincing and effective automated agents.

Only Bad Bots

The new bot law, which won’t officially take effect until July 2019, is only designed to outlaw bots that are made and deployed with the intent to mislead the other person about their artificial identity for the purpose of knowingly deceiving them.

Get Rid of Default Passwords

The other recent tech law passed in California and making the news bans easy-to-crack but surprisingly popular default passwords, such as ‘admin’, ‘123456’ and ‘password’, in all new consumer electronics from 2020. In 2017, for example, the most commonly used passwords were reported to be 123456, password, 12345678 and qwerty (SplashData).  ‘Admin’ also made number 11 on the top-25 most popular passwords list, and it is estimated that 10% of people have used at least one of the 25 worst passwords on the list, with nearly 3% having used the worst password, 123456.
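
Screening out such passwords is a simple deny-list check, as in the short Python sketch below; the abbreviated list stands in for SplashData’s full top 25, and the minimum-length rule is an illustrative assumption:

    # Abbreviated stand-in for a full 'worst passwords' deny-list.
    WORST_PASSWORDS = {"123456", "password", "12345678", "qwerty", "admin"}

    def is_acceptable(candidate: str) -> bool:
        # Reject known-bad passwords and trivially short ones.
        return len(candidate) >= 8 and candidate.lower() not in WORST_PASSWORDS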

The fear is, of course, that weak passwords are a security risk in themselves, and that leaving easy default passwords in consumer electronics products and in routers from service providers has given hackers easier access to the IoT. Devices taken over because of poor passwords can be used to conduct cyber attacks, e.g. as part of a botnet in a DDoS attack, without the user’s knowledge.

Password Law

The new law requires each device to come with a pre-programmed password that is unique to each device, and mandates any new device to contain a security feature that asks the user to generate a new means of authentication before access is granted to the device for the first time. This means that users will be forced to change the unique password to something new as soon as the device is switched on for the first time.
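
Below is a sketch of what that first-boot flow implies for a device’s firmware, again in Python for illustration; the law specifies the behaviour rather than the implementation, so all names here are hypothetical:

    import secrets

    class Device:
        def __init__(self):
            # Factory provisioning: a password unique to this unit,
            # never a shared default like 'admin' or '123456'.
            self._password = secrets.token_urlsafe(12)
            self._first_boot = True

        def log_in(self, attempt):
            if attempt != self._password:
                return "access denied"
            if self._first_boot:
                return "set a new password before continuing"
            return "access granted"

        def set_password(self, old, new):
            # The user must authenticate with the unique factory password
            # and choose a new one before normal access is granted.
            if old == self._password and new != old:
                self._password = new
                self._first_boot = False
                return True
            return False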

What Does This Mean For Your Business?

For businesses using bots to engage with customers, provided the organisation has good intentions, there should be no problem with making sure that the bot informs people that it is a bot and not a human. As AI bots become more complex and convincing, this law may become more valuable. Some critics, however, see the passing of this law as another of the many reactions and messages being sent about interference by foreign powers, e.g. Russia, in US or UK affairs.

Stopping the use of default passwords in electrical devices and forcing users to change the password on first use of the item sounds like a very useful and practical law that could go some way to preventing some hackers from gaining easy access to and taking over IoT devices e.g. for use as part of a botnet in bigger attacks. It has long been known that having the same default password in IoT devices and some popular routers has been a vulnerability that, unknown to the buyers of those devices, has given cyber-criminals the upper hand. A law of this kind, therefore, must at least go some way in protecting consumers and the companies making smart electrical devices.