The US takes a sectoral approach to privacy regulation. This approach creates a web of federal- and state-level laws that, despite their sheer volume, can nevertheless be ineffective due to substantial gaps in legal protections.
Unless an activity is directly regulated within a specific sector, or under state law, it may be left outside regulatory control. Biometrics is an area that does not have its own dedicated sectoral regulation per se, but it does fall under some existing sectoral federal regulations, providing some indirect regulation, and there is also some state-level regulation of biometrics. The US is not without regulation, including biometric regulation, but the existing regulations do not do all that needs to be done to accomplish privacy protections on par with, for example, those of the European Union.
States also have their own regulations that can sometimes overlap with protections provided at the federal level.
In some cases, state laws can provide regulation for areas not covered under any federal regulation. An additional complicating factor in the US is that federal laws may not always pre-empt state-level laws. HIPAA is a federal regulation that provides a regulatory baseline, but states can pass laws that provide additional protections. The California CMIA, for example, required that California residents who are patients be notified of any medical data breach involving their data, prior to the existence of any such federal standard. One additional complicating factor regarding state and federal law is that court decisions can expand or contract the interpretation of the laws, or of the Constitution.
Decisions can be made in civil or criminal cases. In one example of a criminal court decision regarding biometrics, a Virginia state circuit court ruled that a criminal defendant cannot be compelled to disclose a passcode to a smartphone, noting that the passcode would be both compelled and testimonial evidence, and therefore would be protected. Because this decision was made in the state of Virginia, there is some uncertainty about how it might be applied in other states.
As stated earlier, there is not just one overarching law that applies to biometrics in the US. However, biometric data is not specifically called out, and many limitations and loopholes exist in both cases [ 38 ]. In discussing US federal government use of biometrics, it is important to further discuss the Privacy Act of 1974. The Privacy Act is an important baseline federal privacy law. The Act covers nearly all personal records maintained by federal agencies. One section of the Privacy Act, Section 7, also applies to state and local agencies; it requires that individuals not be denied benefits for failure to produce a Social Security Number.
The Privacy Act remains a law of substantial consequence for federal agency privacy practices, including the use of biometrics. The Privacy Act may overlap other sectoral privacy protections. For example, if a federal agency has health information about an individual, that person is entitled to the best of the protections in both HIPAA and the Privacy Act.
Even though this might sound like it grants European-style Consent mechanisms to individuals, that is not the case. There are twelve statutory exceptions to the Disclosure Prohibition, including an exception for law enforcement requests. The Act also provides for other key FIPs, including accounting of disclosures, access, the right to amend, and agency record-keeping requirements. One of the most visible ways that Federal law enforcement agencies that use biometrics comply with the Privacy Act of 1974, and some subsequent information privacy laws, is by publishing descriptions of their record-keeping practices in the Federal Register, preparing Privacy Impact Assessments, and following other rules.
This requirement is a key aspect of the privacy provisions of the E-Government Act of 2002. US government law enforcement agencies have requirements under the Privacy Act regarding how biometrics may be used to either authenticate or verify identity. For identity verification in particular, a hybrid approach combining machine matching and human examination is in use at the Federal level, in order to ensure accuracy and to reduce high false-positive rates.
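The hybrid matching approach described above can be sketched as a simple score-triage policy: the automated matcher decides only the clear cases, and the ambiguous middle band is routed to a human examiner rather than being reported as a positive identification. The thresholds, function name, and score semantics below are illustrative assumptions, not taken from any actual federal system:

```python
# Hypothetical sketch of a hybrid biometric identity-verification flow.
# An automated matcher produces a similarity score in [0, 1]; only
# high-confidence matches are accepted automatically, and borderline
# scores are escalated to a human examiner. The threshold values are
# invented for illustration.

AUTO_ACCEPT = 0.98   # at or above this, the machine match is trusted
AUTO_REJECT = 0.60   # below this, the candidate is discarded outright

def verify_identity(similarity_score: float) -> str:
    """Triage a matcher score into accept / reject / human review."""
    if similarity_score >= AUTO_ACCEPT:
        return "accept"
    if similarity_score < AUTO_REJECT:
        return "reject"
    # The ambiguous middle band is where false positives concentrate,
    # so it is escalated rather than auto-decided.
    return "human_review"

print(verify_identity(0.99))  # accept
print(verify_identity(0.75))  # human_review
print(verify_identity(0.40))  # reject
```

The design point is that the machine never unilaterally asserts a borderline identification, which is precisely the property the text notes is missing at some municipal deployments.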
However, the hybrid approach is not always in place at the municipal level of law enforcement, which can lead to the improper interpretation of biometric analysis results. There has been concern regarding the use of biometrics at the municipal level in a biased and unfair manner, as well as concern regarding mission creep in Federal uses of biometrics in law enforcement. In the healthcare sector, the use of biometrics requires increased attention due to the rapid adoption of these technologies in private healthcare provider settings, such as clinics, in the US.
HIPAA has two separate regulations, the Privacy Rule and the Security Rule. There are no specific technical standards for biometrics, and security rules and procedures are the same for biometrics as for other forms of Protected Health Information. The health biometric playing field is thus open to state regulation, which would almost certainly be stronger than a HIPAA application.
At the state level, there is increasing legislative activity around the use of biometrics. Of more interest is the direct and intentional regulation of biometric use. Illinois, Texas, and Connecticut have already passed biometric data privacy legislation. Notably, class action lawsuits based on the Illinois Biometric Information Privacy Act have been brought against entities that allegedly did not gather Consent prior to biometric use. Finally, there is also very narrowly focused state-level legislation around the use of biometrics by children. In a US-centered biometrics use context, two recent self-regulatory efforts bear examination.
President Barack Obama initiated an overarching policy program with a direct focus on the privacy of data. The general goal of the multi-stakeholder process (MSP) was to forge a different way to develop privacy self-regulation: open, transparent forums in which stakeholders who share an interest in specific markets or business contexts will work toward consensus on appropriate, legally enforceable codes of conduct. Private-sector participation will be voluntary, and companies ultimately will choose whether to adopt a given code of conduct.
The participation of a broad group of stakeholders, including consumer groups and privacy advocates, will help to ensure that codes of conduct lead to privacy solutions that consumers can easily use and understand. A single code of conduct for a given market or business context will provide consumers with more consistent privacy protections than is common today… The focus of the first effort was mobile application transparency, specifically, short-form privacy notices for mobile devices such as smartphones.
The first point of contention was that, prior to the discussions, the parameters of the discussions were set to exclude government use of biometrics, a requirement from the NTIA that the advocacy groups generally did not agree with and were given no opportunity to dispute. During the discussions, a second key point of contention was the role of consumer Consent in the collection of biometric information in commercial activities. Industry representatives did not concede that consumers had any relevant role in the issuance of Consent related to biometric information collection or use.
Therefore, in June 2015, after more than a year of meetings, the privacy, civil liberties, and advocacy groups staged a well-publicized walkout, formally abandoning the NTIA facial recognition stakeholder process. In their words: "At this point, we do not believe that the NTIA process is likely to yield a set of privacy rules that offers adequate protections for the use of facial recognition technology. We are convinced that in many contexts, facial recognition of consumers should only occur when an individual has affirmatively decided to allow it to occur." Multi-stakeholder processes work best when a broad range of stakeholders with differing viewpoints fully participate.
Most importantly, stay in the room. The final outcome of the NTIA facial recognition proceeding remains murky; however, a commercial-sector code of conduct now exists that is antithetical to the perspective offered by privacy advocates. Regardless, and for the purposes of this article, there are selected points worth noting from the NTIA exercise. Nonetheless, the NTIA facial recognition effort was a high-profile policy failure.
With a better structure for discussion, more willingness on the part of all participants to find a middle ground, and a more specific use case to discuss, the outcome might have been different. Although the US is a high-income country, its approach to biometric regulation is not as protective as that of the European Union. The US approach does not offer enough regulatory protections to guide the increasing uses of biometrics. While the Privacy Act of 1974 should theoretically provide protection in the case of Federal government uses of biometric technology, the US Government Accountability Office (GAO) report on the implementation of Federal privacy law was not encouraging regarding compliance and transparency in uses by law enforcement.
The US does not have a specific identity authority, nor does it have a formal data protection office. While the US Federal Trade Commission (FTC) is tasked with enforcement of some consumer protection laws regarding unfair and deceptive business practices, the FTC is by no means a full-fledged data protection authority. And while the US does have a Department of Transportation (DOT), the DOT is similarly not a full-fledged identity authority with legislation mandating that managing the integrity of citizens' identity be its primary focus. The US has not supported the idea of a national digital identity scheme thus far, and the negative reaction to REAL ID is an indicator that further development will require a more privacy-protective legislative approach to the issue.
However, it is unlikely that over the long term the US can remain one of the few countries in the world without some form of national digital biometric identification, which means there is much work to be done regarding biometric policy and privacy protections in the US at the federal and state levels. Each of the three jurisdictions discussed in this paper has a completely different framework for the data protection and privacy of biometrics. In the Republic of India, the Aadhaar Act and other legislation do not provide comprehensive data protection and privacy for the Aadhaar program and its use of biometric data.
The EU, as discussed, has an omnibus data protection and privacy policy that is comprehensive and also includes specific language regarding biometrics processing, including automatic processing. As such, the EU has protective data protection and privacy regulations already in place for any member country that builds or employs biometrics, or more broadly, a digital biometric identity system.
While the REAL ID Act does include some aspects of identity systems, it leaves biometric use up to the states, and therefore does not act as a unifying regulatory framework for biometrics or for digital identity systems generally. In the US, some data protection for biometrics comes from the Privacy Act of 1974, which has numerous exceptions; some comes from sectoral law, such as HIPAA; and some comes from state law, which is very limited in scope at this time.
Consent is a core issue in regards to biometrics and identity, and among the myriad potential issues, Consent is one of the most contested. If there is no fundamental Consent for individuals regarding biometrics and identity, then autonomy and human freedoms can be at risk, depending on existing protections and how well those protections are enforced. As with the differing standards for privacy, there is also no single standard, global definition in use for Consent regarding the use of biometrics.
As discussed elsewhere in this paper, Fair Information Practices provide the baseline for most global privacy law, and although the principles do not cover all privacy rights, they are a globally accepted baseline. In India, the Aadhaar Act and other existing regulations do not provide robust Consent provisions regarding the collection of biometrics; it should be noted that the Act contravenes the India Supreme Court's interim decision regarding the voluntariness of Aadhaar.
This is a foundational problem in India regarding Aadhaar and Consent. Considering health use cases in India specifically, healthcare information is deemed sensitive data under India's sectoral law. Ideally, well-thought-through policies would be in place to provide meaningful checks and balances for individuals. In India, the early emphasis has been on reducing inefficiencies, not on protecting privacy or autonomy. The loss of autonomy regarding Consent has been deeply felt, and now needs to be addressed.
One example of the difficulty of making Aadhaar mandatory for health services is the newly mandatory use of Aadhaar for women and others in India who are being rescued from prostitution, who cannot receive rehabilitative services until they have enrolled in Aadhaar. One prominent legal scholar said the anonymity of these women was the first casualty. Those who want to be rescued from that life already have many hurdles to overcome, not the least of which is social stigma and shame; the required loss of anonymity in seeking health services adds to the obstacles facing these individuals, and is not acceptable on a human level.
But these vulnerable individuals are not the only casualties of coerced Consent for Aadhaar in India. For example, the state of Maharashtra mandated that the Aadhaar Enabled Biometric Attendance System (AEBAS), which is connected to all central government offices, be used in all government-run hospitals in the state. This requirement applied to health workers. Numerous articles about problems and negative reactions among health workers across India have been published. In one hospital, 22 doctors refused to use the biometric attendance system and, by way of protest, were absent from duty [ 48 ], alleging that the system was discriminatory.
One issue was that the AEBAS system allows for real-time attendance data to be stored in the Aadhaar central database, which employees and officials can view [ 49 ]. The privacy challenges in such a detailed, centralized, transactional database open to external government and employer access are significant. Even though biometrics are involved in both instances, the privacy implications are different. In the India example, there is simply no fundamental privacy redress for affected individuals, and the issue of a lifelong, government-controlled, central tracking database of life, financial, health, and work activities is something that fuels the darkest of Orwellian fears.
In the EU, coerced Consent is a policy issue addressed by law. For example, the following FDA statement relates to patient Consent and the issue of coercion: "Consent documents should not contain unproven claims of effectiveness or certainty of benefit, either explicit or implicit, that may unduly influence potential subjects. Overly optimistic representations are misleading and violate FDA regulations concerning the promotion of investigational drugs [21 CFR ]." This is foundational to consent that is well-educated by facts, thus creating the ability for an individual to make an informed consent decision.
Other options can, and should, be made available so as to avoid such outcomes, in both the technical and policy solutions presented. While gaining Consent in biometric use cases is critical, such Consent does not translate into blanket protection of privacy; it does, however, have a proper place in asserting biometric policy. Regarding biometrics-specific consent policies in the United States, specific biometrics Consent policy exists only in state law.
In Europe, obtaining Consent in general is the basis of most privacy and human rights-focused laws, decisions, and discussion. Biometric data as defined in the GDPR is considered sensitive data and therefore requires Consent as part of the sensitive data category; under the terms of the GDPR, all biometric use conditions require special processing under that category. The EU decision to include biometric data as a sensitive data category in the GDPR is bound to have profound policy impacts in the biometrics world.
The impact will be most keenly experienced by entities based in Europe or doing business with Europeans. The use of biometric systems for the identification of patients has already begun in Europe. Healthcare providers within individual EU member countries, for example Ireland, are introducing biometrics into health provider settings.
A typical scenario is that patients enroll in the biometric system and provide personal biometric information for the stated purpose of identity verification, in relation to their record and for anti-fraud purposes. Healthcare providers in EU member countries will have to comply with GDPR requirements in 2018, including those who provide allied services in healthcare settings, which will require attention to processing controls.
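As a rough illustration of what GDPR-style special-category handling implies for an implementer, the sketch below gates biometric processing on an explicit, unwithdrawn, purpose-specific consent record. The class and field names are invented for illustration and do not correspond to any real compliance library:

```python
# Illustrative sketch of a consent gate for biometric (special-category)
# data. All names are hypothetical; this is a conceptual model, not a
# legal-compliance implementation.

from dataclasses import dataclass

@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str          # e.g. "identity_verification"
    explicit: bool        # sensitive-category data needs explicit consent
    withdrawn: bool = False

def may_process_biometrics(consent: ConsentRecord, purpose: str) -> bool:
    """Allow processing only with explicit, unwithdrawn consent that was
    given for this specific purpose (purpose limitation)."""
    return (consent.explicit
            and not consent.withdrawn
            and consent.purpose == purpose)

c = ConsentRecord("patient-123", "identity_verification", explicit=True)
print(may_process_biometrics(c, "identity_verification"))  # True
print(may_process_biometrics(c, "marketing"))              # False
```

The second call fails because consent was given for identity verification only; reusing the same biometric data for another purpose would need its own lawful basis.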
In Europe, if a health care provider requires a patient to enroll in a single-provider biometric silo (which it can do), patients in EU settings should, on the basis of both the existing Data Protection Directive and the GDPR, receive other supporting privacy rights, such as access, transparency, and correction. And the processing of the biometric data will still have to comply with all applicable EU standards. The US does not have any consolidated regulatory framework across sectors focused on biometric Consent policies. As discussed earlier, some laws touch on biometrics held by sectoral entities, like the federal government.
But sectoral laws, like the Privacy Act of 1974, do not mention biometrics specifically. The only law requiring explicit Consent for biometrics is currently at the state level, for example, the Illinois state law BIPA, which requires Consent specifically for biometrics collection. BIPA, however, does not have a complex Consent policy. To find mature Consent policy examples in the US, one has to study policy assertions apart from biometrics. The US Food and Drug Administration (FDA) has a detailed description of Consent, for example, which specifies all that must be done to ensure that Consent is meaningful, voluntary, and not coerced.
However, such a Consent policy cannot be interpreted, directly or indirectly, as one that would fully cover or apply to the use of digital biometric identity in any simple or straightforward way. When biometrics are used in non-research healthcare settings for authentication or identification, the Consent documents required under human subjects research rules generally do not apply.
This is because research Consent documents are generally not required for non-research healthcare provider activities, and research Consent documents are focused on the actual health research, not the identity documentation of the patient or research subject. It is a gap in the regulatory structure. Consent has become a point of contention in US health care settings that require biometric enrollments for patients. The Florida Hospital Association opposed the provision, and raised substantive legal and privacy concerns [ 51 ].
Public hospitals in the US are prevented from making biometric requests mandatory by laws that prohibit conditioning the provision of treatment on identification. Biometrics installations at private US healthcare providers, such as private hospitals, may not be subject to the same requirements, however. Yet discussion of biometric template takeover, spoofing or falsifying of biometric identity, full biometric identity takeover, data breach risks, and other significant complications of patient biometric systems is almost never included in discussions around implementations [ 55 ].
It is also rare to find media articles mentioning problems with biometrics security in healthcare settings in the US. Later in this article, untraceable biometrics are discussed as an important area for future work that could help attenuate some present and future challenges in this area.
However, how the legislative language is contextualized in terms of definitions of Consent and procedures is what separates the jurisdictions in available privacy protections. Mandatory biometrics use propositions in India need to be addressed directly and with some urgency, and especially so in the health services context. In the case of digital identity systems, formal data protection and privacy legislation is a must; voluntary guidance or voluntary principles are not an acceptable substitute. Other legal jurisdictions generally have either weaker protections, or no protections at all.
As discussed, the US has some federal and state legislation that touches on aspects of either identity or biometrics, and sometimes both, as in the REAL ID Act; however, the US does not have specific, focused federal legislation around the broad use of biometric data. In non-EU jurisdictions, much progress is possible if serious attempts at legislation aimed at improving data protections and privacy specifically for biometrics use, including digital biometric identity data, are undertaken. There is no doubt that economic and cultural differences impact deployment of digital identity systems and biometrics as well as policies around those systems.
The US, for example, will have to take a different approach to legislation than India, based on multiple factors such as the structure of existing federal legislation and the state of development of biometrics in each country. However, that is not an excuse for the US and India to avoid working on the challenging issue of passing new legislation. In India in particular, because Aadhaar is already pervasive and built on a central database, data protection and privacy legislation specific to Aadhaar is important, and urgent, for India to put in place.
Generally, low income, middle income, and high-income countries have different levels of development and may not be able to physically support the same kinds of technologies, systems, or policies. Some countries may not have the same cultural conceptions of individual privacy rights. Nevertheless, despite the many types of legislation that might be appropriate for any given economic jurisdiction or region, several core legislative concepts stand out.
These concepts may be used across cultural and economic boundaries. Digital biometric identity systems have power, and once granted, that power can be used for good or otherwise. Adding biometrics to an identity scheme, digital or paper-based, simply increases the power of the identity scheme by increasing belief in the accuracy of the system in uniquely identifying or authenticating a person.
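A simple back-of-the-envelope calculation shows why that belief in accuracy deserves scrutiny. Using an assumed, purely illustrative false-match rate of one in a million comparisons, a single one-to-many search against a population-scale gallery still yields many spurious candidates:

```python
# Illustrative base-rate arithmetic; the error rate is an assumption,
# not a measured figure for any real system.

population = 1_300_000_000      # roughly Aadhaar-scale gallery size
false_match_odds = 1_000_000    # assumed: one false match per million comparisons

# Expected number of wrong candidates returned by one 1:N search that
# compares the probe against every enrolled record.
expected_false_matches = population / false_match_odds
print(expected_false_matches)   # 1300.0
```

Even a matcher that errs only once per million comparisons would, under these assumptions, surface on the order of a thousand false candidates per search, which is one reason belief in perfectly "unique" identification is misplaced and supporting policy and review matter.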
As such, the Do No Harm mandate is of primary importance in all identity systems, particularly those using biometrics. These principles are important because they are aimed at developing countries; fortunately, they do indeed include principles relating to privacy and non-discrimination. However, they do not include a Do No Harm principle. It is the most important missing element, and adding Do No Harm would improve the principles considerably.
Different political, economic, and cultural contexts exist for digital biometric identity systems, so it can be expected that different types of harm will arise, each unique to the system in which it is situated. In practice, Do No Harm means that biometrics and digital identity should not be used by the issuing authority, typically a government, to serve purposes that could harm the individuals holding the identification.
Nor should the system be used by adjacent parties to create harm. Examples of harm include identifying highly sensitive divisions among populations, such as ethnicity, religion, or place of origin; merely attaching such data to a unique biometric is a substantive harm in and of itself. Using an identity system to discriminate against, harass, or improperly deny services to people, or to otherwise cause harm based on distinctions such as age, gender, or socioeconomic status as revealed by a place of residence, also constitutes harm.
In India, it is a great harm, existing today, to condition the delivery of rehabilitative services to women and others attempting to escape prostitution on their enrollment in the Aadhaar program. As discussed in the Consent section of this paper, the required loss of anonymity in seeking rehabilitative or health services adds to the obstacles facing these individuals and is not acceptable on a human level.
Another type of harm can arise from the politics of identity. Some identity systems have been tied to the politics of a government or an ethnic faction of a government. It is very difficult to de-link identity systems from the government that issues the ID, but every effort should be made to de-link e-ID systems from the politics of the government or faction in power.
This is the ultimate harm, and all efforts should be taken to avoid it in the future. Identity systems, no matter what form they come in, paper or digital, must work for the public good and must do no harm. And identity systems, due to their inherent power, can cause harm when placed into hostile hands and used improperly. Great care must be taken to prevent this misuse. Do No Harm requires rigorous evaluation, foresight, and continual oversight.
Legislating in reverse is extremely difficult. When the technology for the Aadhaar system, including the collection of biometrics, was discussed as a potential program, legislation regulating the targeted and limited use of the Aadhaar identity and data should have been put forward as a mandatory step prior to any widespread technical deployment or biometric enrollment of residents. As discussed in this article, although several iterations of acceptable privacy legislation have been drafted in India, including as the technology was being initially deployed, none of the legislation has passed.
The lack of protective policy has allowed the Aadhaar ID to go from voluntary to now mandatory in many situations, without appropriate data privacy protections. As of today, the Aadhaar ID system is subject to considerable mission creep, and there are concerns about how it might be used in the future. It is very unclear whether India will pass data protection legislation for the Aadhaar system. When advanced digital biometric ID systems are discussed, Estonia is frequently cited, in addition to Aadhaar, as an exemplar of a modern digital identity system.
Estonia, as a member of the European Union, already had a robust policy framework in place before it deployed its e-ID, or digital identity, system. Because of the underlying EU data protection and privacy rules, Estonia is obliged to comply with all EU law, including the EU data privacy directives. This is beneficial, as proposed future uses of biometric technology at the federal level should conceivably be made public prior to their installation and use.
However, this is limited in that Privacy Impact Assessments (PIAs) are published only for government uses of technologies; moreover, the publication of a PIA does not guarantee that a bad program will not move forward. The US, as discussed, has widely deployed biometrics in non-federal sectors such as healthcare.
Almost all of these deployments have occurred without specific biometric legislation preceding the deployment of the technology. As discussed in this paper, there is no federal law that specifically protects biometric data collected, for example, by schools, hospitals, commercial entities, or other non-federal entities. And when a US federal agency delays publication of a Privacy Impact Assessment, it becomes nearly impossible for individuals to assess what the federal government is planning.
These guidelines could, for example, cover very narrow use cases where regulatory rules presently do not offer specific guidance related to best practices, establishing procedures, and administrative controls. An important policy document to consider comes from the European Data Protection Supervisor (EDPS), which, in 2015, published a watershed opinion regarding data ethics and privacy. In many contexts, more applicable to jurisdictions outside the EU than inside it, there is interest in supporting such discussions.
Structural and financial support for such activities will need to be put into place, or support will need to be provided by the EU Central Authority, or by other countries. These same principles, although initially written as applicable to self-regulatory schemes, can also apply to multi-stakeholder processes with the stated purpose of crafting ethical data use guidelines.
Despite the potential for failure [ 56 ], it is nevertheless important for industry and consumer-focused stakeholders to convene, allowing each stakeholder to put forward an independent contribution, in order to look at multiple, narrow use-case scenarios regarding biometrics use and data ethics.
In many respects, ethical data use guidelines have a greater possibility of success when approached from very narrow use cases. In all jurisdictions, one important use case could be ethical data practices around particularly sensitive ethnic data. It would, over the long term, be helpful to have open, joint stakeholder discussions amongst countries with large-scale biometrics installations, so as to share solutions, findings, and amassed expertise; discuss concerns and challenges; and engage in forward-thinking policy construction relating to ethics, data protection, and privacy.
The crafting of ethical data use guidelines in the area of privacy would need to take account of standards, which could differ markedly depending on geography; Fair Information Practice principles (FIPs), key provisions of the GDPR, and the ID4D Principles on Identification, among others, could potentially be discussed. Digital identity systems and systems that use biometrics need to be designed in such a way that they cannot fail, even when political regimes and the will of legislators do [ 63 ].
This core concept, derived from the Privacy by Design school of thought, is particularly important in the case of biometrically-enhanced digital ID systems. If an individual can be uniquely identified by a strong biometric like an iris scan, there is a great burden on the designers of that system to ensure failsafes for the individuals who hold that identity. This kind of design is becoming more technically possible, but there is not yet a deployment that would sufficiently protect identity holders from abuse of the identity by those in power.
All jurisdictions would benefit from an approach that considers privacy by design in biometric identity systems, though such an approach should not be seen as a substitute for legislation or other protections. Ann Cavoukian, former Privacy Commissioner of Ontario, Canada, had the prescience while in office to craft and adopt a policy for biometric technology use in the late 1990s [ 66 ]. The protections are remarkable for their time and include the use of untraceable biometrics supported by policy.
This came about when the City of Toronto wanted to deploy biometrics in order to reduce fraud in public services. Commissioner Cavoukian crafted a policy proposal for the government, and urged formal legislation to enshrine the following practices:

- The biometric (in the case of the City of Toronto, a finger scan) should be encrypted;
- The use of the encrypted finger scan should be restricted to authentication of eligibility, thereby ensuring that it is not used as an instrument of social control or surveillance;
- The identifiable fingerprint cannot be reconstructed from an encrypted finger scan stored in the database, ensuring that a latent fingerprint (that is, one picked up from a crime scene) cannot be matched to an encrypted finger scan stored in a database;
- The encrypted finger scan itself cannot be used to serve as a unique identifier;
- The encrypted finger scan alone cannot be used to identify an individual (that is, in the same manner as a fingerprint can be used);
- Strict controls on who may access the biometric data, and for what purposes, should be established;
- The production of a warrant or court order should be required prior to granting access to external agencies such as the police or government organisations;
- Any benefits data (personal information such as history of payments made) are to be stored separately from personal identifiers such as name or date of birth.

The final legislation also included a specific provision that applied to the full gamut of administrators of the biometric system.
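As a rough illustration, the core of these protections -- a keyed one-way transform of the biometric, one-to-one eligibility checks only, and separation of benefits data from personal identifiers -- can be sketched in Python. This is a hypothetical sketch, not the actual Toronto system: all names are invented, and a plain HMAC stands in for true biometric encryption, which in practice must tolerate the natural variation between successive scans.

```python
import hashlib
import hmac
import os
import secrets

# The key is held by the benefits agency, separate from the database, so a
# latent print cannot be matched against stored records without it.
SYSTEM_KEY = os.urandom(32)

def protect_template(raw_template: bytes) -> bytes:
    # Keyed one-way transform: the identifiable print cannot be reconstructed
    # from what is stored. (A real deployment would use biometric template
    # protection, since scans are fuzzy; a hash is only a stand-in here.)
    return hmac.new(SYSTEM_KEY, raw_template, hashlib.sha256).digest()

# Two deliberately separate stores, linked only by an opaque token -- no
# name or date of birth appears alongside either the biometric or the
# benefits history.
biometric_store: dict[str, bytes] = {}   # token -> protected template
benefits_store: dict[str, list] = {}     # token -> payment history

def enroll(raw_template: bytes) -> str:
    token = secrets.token_hex(16)        # unique but meaningless identifier
    biometric_store[token] = protect_template(raw_template)
    benefits_store[token] = []           # history kept apart from identity
    return token

def verify_eligibility(token: str, raw_template: bytes) -> bool:
    # One-to-one authentication only: the system never searches across all
    # templates, so the scan cannot serve as an instrument of surveillance.
    stored = biometric_store.get(token)
    if stored is None:
        return False
    return hmac.compare_digest(stored, protect_template(raw_template))
```

The design choice to key the transform and to avoid any one-to-many search is what prevents the stored value from acting as a universal identifier, mirroring the policy requirements above.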
While the final regulation was not as complete as the initial IPC recommendations, it stands as a groundbreaking and forward-looking piece of biometric regulation.
The regulation is important for its technical protections combined with the policy protections of not allowing for biometric reconstruction or transactional tampering. For example, the social assistance data would not be readily accessible by potential employers. The City of Toronto achieved its goal of reducing fraud, and the IPC achieved its goal of protecting consumer privacy. Today, many opportunities exist to use technical biometric protections in ways that enhance consumer privacy, dignity, and autonomy.
However, the best practices, knowledge, and discussion must be public, ongoing, and robust in order for this to occur. Many additional principles for legislation exist; this is by no means a complete list. These actions by the government of India have led to a marked lack of protective regulatory controls for the Aadhaar program, which has in turn resulted in profound mission creep and a loss of autonomy. India is a case in point that by the time a deliberative legislature can move a thoughtful bill to passage, a fledgling biometric program may have attained pervasiveness, and thus be very difficult to regulate or roll back.
The mission creep and data linkages around the Aadhaar identity number are a high priority to address. Though Aadhaar began as a voluntary identity card, Indian residents now cannot even buy a train ticket without an Aadhaar number, nor can they marry, purchase or own property, or teach; soon banking records and medical records will be tied to the central identifying Aadhaar scheme. In the name of efficiency or modernization, is it appropriate or desirable to link life activities to a central government database, one without vigorous privacy protections and without significant constraints on government access to that data?
Aadhaar has now been in place for a number of years, and the Indian government has been greatly expanding Aadhaar linkages. If uses are left to expand uncontrollably, the Aadhaar system could turn into a golden key that exerts far too much unchecked control over citizens.
While the introduction of biometrics into sensitive data categories surprised many in other countries, it was the right choice, made at the right time, to protect human rights as biometric deployment increases. For its part, the US system does not have effective, specific legislative protections at the federal level regarding biometrics.
It has limited areas of protection, and the trickle of state-law activity could, if increased, serve to bolster protections in some limited areas of biometric use, but that will not be enough by itself. Going forward, the hope is that smart regulators will heed the warning bells and enact reasonable, privacy-protective legislation now.
If there is one key lesson to be learned, it is that policy development needs to focus on the concept of Do No Harm, and policy should come before technology deployment whenever possible. When that has not been possible prior to the launch of the technology, then policy development needs to be a top-line priority thereafter. Biometrics have the ability to create trusted identities, and where that exists in digital, transactional ecosystems, a high degree of risk to fundamental civil liberties and privacy also exists.
It is simply not possible to have a digital ID with biometrics that does not create fundamental risks of surveillance, risks of social and/or political control using the system, and the risk of pervasive privacy violations. No matter the level of economic or legislative development of a region, Do No Harm must be the bedrock guiding principle of all digital biometric identity systems.
This article does not contain any studies with human participants or animals performed by any of the authors. The data is available for download. McClaughry began fingerprinting all inmates at the Leavenworth, KS, federal prison. These fingerprint records became the beginning of the U.
Interpol collects multiple biometrics, including DNA. Abdullah and Alhijily [ 9 ]. See Chapter 2, Antebellum ID: This can include fingerprints, facial geometry, gait, DNA, or even ear shape. Biometrics-based information can be used to identify one specific person out of many in a one-to-many comparison, or it can be used to verify or authenticate that individual in a one-to-one comparison. Biometric systems are generally set to run in either identification or verification mode.
An example of an identification system would be a law enforcement system that uses a fingerprint to search across millions of stored fingerprints for a match. In biometric discussions, the distinction between identification and verification is important to take into account. Formal definitions of biometrics according to ISO standards are as follows: Information technology, Vocabulary. Digital Dividends, World Bank Group. See also Peng et al., Deep learning for face recognition. Selinger and Hartzog [ 15 ].
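The distinction between the two operating modes can be illustrated with a minimal, hypothetical sketch. Exact byte equality stands in for real fuzzy template matching, and all names and templates are invented:

```python
# Enrolled templates (a toy "gallery"; real systems hold millions of records).
gallery = {
    "alice": b"template-A",
    "bob":   b"template-B",
}

def identify(probe: bytes) -> list[str]:
    """One-to-many (identification): search the whole gallery.

    Answers "who is this?" -- the mode used, e.g., by law enforcement
    fingerprint searches.
    """
    return [name for name, template in gallery.items() if template == probe]

def verify(claimed_name: str, probe: bytes) -> bool:
    """One-to-one (verification): compare against a single claimed identity.

    Answers "is this really Bob?" -- the mode used for authentication.
    """
    return gallery.get(claimed_name) == probe
```

The privacy stakes differ accordingly: identification requires searching everyone's data for every query, whereas verification touches only the single claimed record.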
See an exemplar at Kaur and Neeru [ 16 ]. This book was written by an individual closely associated with installing the program; as such, it is strongly biased. It nevertheless contains important documentary knowledge about how the biometric deployment took place. Wilson [ 20 ]. For an additional discussion of government accountability specific to India, see also Mukerjee [ 21 ]. For this reason, in some cases, Aadhaar is described as having 16 digits.
See UID to have 12 or 16 digits? Governance Now, April 28. The answer is yes. But the cards are simple, and if lost, they can be reprinted from a computer. See the Aadhaar Card Information Page. The biometric information required includes a photo, 10 fingerprints, and an iris scan. At this time, DNA information is not required for an Aadhaar card. See also the technical illustrations in the Introduction.
The key concerns are the lack of data protections, and the ability of the government to use the proposed system in ways that could subvert existing civil liberties. Medium, March 22. India needs a strong data protection law. Hindustan Times, April 29. Nilekani, Nandan and Shah, Viral. At the time, the program was promoted as a small subsidy reform program. After two years, it was apparent that Aadhaar had grown in scope, and it became a political struggle; however, by then it was too late.
The politicians who wanted to pass legal protections failed in their quest, and in the vacuum of no legislation, the Aadhaar simply continued enrollments. This report recommends nine fundamental Privacy Principles to form the bedrock of the proposed Privacy Act in India.
These principles, drawn from best practices internationally and adapted suitably to an Indian context, are intended to provide a baseline level of privacy protection to all individual data subjects. The fundamental philosophy underlying the principles is the need to hold the data controller accountable for the collection, processing, and use to which the data is put, thereby ensuring that the privacy of the data subject is guaranteed. Report of the Group of Experts on Privacy. Individuals holding public jobs, such as teachers, noted that they had to enroll for Aadhaar or lose their position.
Regarding external entities, see the relevant clause: it provides that nothing in the proposed legislation shall prevent the use of the Aadhaar number for establishing the identity of an individual for any purpose, whether by the State or any body corporate or person, pursuant to any law for the time being in force, or any contract to this effect; but the use of the Aadhaar number under this clause shall be subject to the procedure and obligations under clause 8 and Chapter VI of the proposed legislation.
However, there is not widespread compliance with the High Court stipulation. Railways to make Aadhaar mandatory for booking of all tickets, Sept. Provided that any information that is freely available or accessible in public domain or furnished under the Right to Information Act, or any other law for the time being in force, shall not be regarded as sensitive personal data or information for the purposes of these rules.
National Economic Survey, India. Frontline India dedicated an issue to Aadhaar containing multiple articles on the topic. See also Rethink Aadhaar, a non-partisan citizen dissent page. This agreement, which is complex, allows businesses to function in both jurisdictions despite differences in laws. The Privacy Shield Frameworks were designed by the U.S. Department of Commerce and the European Commission and Swiss Administration to provide companies on both sides of the Atlantic with a mechanism to comply with data protection requirements when transferring personal data from the European Union and Switzerland to the United States in support of transatlantic commerce.
Rights of the Data Subject. Digital Dividends, World Bank. Schwartz, Preemption and Privacy, Yale L. Victims of financial forms of identity theft can use the federal Fair Credit Reporting Act to correct inaccuracies in their affected records, such as credit reports, that are caused by this crime.
But victims of medical forms of ID theft do not have a commensurate right to correct their health care records under HIPAA, because HIPAA does not grant patients a specific right to correct records, even in cases of identity theft. Patients can request to add an amending statement, but they do not have the right to outright delete information from their files held by health care providers, even if it is inaccurate. HIPAA as a sectoral law does not include the same protections as the Fair Credit Reporting Act (which does allow for records correction), thus creating a gap in protection.
See Dixon [ 35 ]. The Commonwealth of Virginia has enacted the Electronic Identity Management Act (EIMA), which is not discussed in this paper as it is not a biometric or privacy regulation, but rather an identity management bill that focuses on establishing an identity trust framework operator and provides limitation of liability for providers.
The privacy rules were first issued and subsequently became effective. Hulette [ 37 ]. Border zone biometric uses are legally complex and require a separate and dedicated analysis. Additionally, statutes focused strictly on technical identity management frameworks have not been analyzed in this article. These kinds of statutes can provide important aspects of a legal framework for digital identity ecosystems, typically apart from privacy.