An Introduction to Functional Encryption and its Applications

Background on Public Key Cryptography

Prior to the invention of public key cryptography, two parties that wished to communicate securely over an unsecured network had to establish a mutual secret a priori, out of band. While this might be acceptable for some small or tightly knit organizations, such solutions are clearly infeasible in today's Internet with its billions of users.

Over thirty years ago, Diffie and Hellman introduced the concept of public key cryptography --- radically changing the face of secure communication. In public key cryptography, two parties can communicate securely with each other without having established an a priori mutual secret. Knowing only a party's globally shared public key, a user can verify that a message was sent by that party (digital signatures) and confidentially encrypt a message to the intended party (public key encryption).

The work of Diffie and Hellman opened a floodgate of applications, and the use of public key cryptography has become ubiquitous. Almost all major software vendors distribute software updates and security patches over the Internet, and digital signatures are a critical tool for ensuring that an update comes from the legitimate vendor and is not malware. Web browsers use public key encryption (via SSL) to send confidential information (e.g., credit card numbers, passwords) over the web to thousands of online services, and modern email readers can send and receive encrypted email.

Functional Encryption

Functional Encryption is a new vision of public key encryption. Traditionally, we view encryption as a mechanism for a user, Alice, to confidentially send data to a target recipient, Bob. Alice encrypts the data under the recipient's public key such that only Bob, with knowledge of his private key, can decrypt it.

However, in some applications, we need to share data according to an encryption policy without prior knowledge of who will be receiving the data. Suppose an informant needs to encrypt a message for anyone in a police department's internal affairs office, or for anyone who is undercover and in the central office -- even though she may not know which officers fit these criteria. The informant will want to encrypt the message with the access policy:
Internal Affairs OR (Undercover AND Central)
In this system, only users with attributes (credentials) that match this policy should be able to decrypt the message. The key challenge in building such systems is to realize security against colluding users. For instance, the encrypted message should not be accessible to a pair of unauthorized users where one holds the two credentials Undercover and North and the other holds the single credential Central. Neither user is actually an undercover agent in the Central office, yet together they hold every attribute the policy asks for, as the sketch below illustrates. Enforcing policy by encryption will become imperative with the emergence of cloud computing and data outsourcing.
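
To make the collusion requirement concrete, here is a minimal Python sketch (purely illustrative; it performs no cryptography) that checks the policy above against a user's credential set. Neither colluder's credentials satisfy the policy on their own, while the union of their credentials would -- and that union is exactly what a secure scheme must prevent them from exploiting.

# Illustrative policy check only -- no cryptography is performed here.
# The policy from the example: Internal Affairs OR (Undercover AND Central)

def satisfies(credentials):
    return ("Internal Affairs" in credentials
            or {"Undercover", "Central"} <= credentials)

colluder_1 = {"Undercover", "North"}  # undercover, but in the North office
colluder_2 = {"Central"}              # in the Central office, but not undercover

print(satisfies(colluder_1))                # False
print(satisfies(colluder_2))                # False
print(satisfies(colluder_1 | colluder_2))   # True: the combination a secure
                                            # scheme must NOT let them exploit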

Recently, the City of Los Angeles joined Washington, D.C. in outsourcing its information services to Google in order to replace an internal system described as slow and antiquated. (Read more in the LA Times.) The biggest obstacle is the LAPD's concern that sensitive arrest information will be leaked.

We start by rethinking the fundamental concept of encryption. Instead of encrypting to a specified recipient, a user in a functional encryption system embeds a "ciphertext descriptor" CD during data encryption. In addition, each user in the system has a private key (issued by an authority), which is associated with a key descriptor KD. A recipient with key descriptor KD can decrypt a ciphertext CT with descriptor CD if and only if a certain relationship R holds between CD and KD (i.e., R(CD, KD) = true). In the example above we can interpret the ciphertext descriptor CD as a boolean formula f over a set of boolean variables, and let the key descriptor KD be the set S of all attribute variables set to true for that particular user. The relationship R holds if the set S satisfies the formula f.
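
The toy Python model below sketches this syntax under the boolean-formula instantiation just described: keys carry an attribute set S, ciphertexts carry a formula f, and decryption releases the payload only when R(CD, KD) holds. The function names follow the usual KeyGen/Encrypt/Decrypt template (the authority's setup and master secret key are omitted), and the model captures only the functionality -- it has no cryptographic hardness whatsoever.

# A toy model of the functional encryption syntax: functionality only, no security.
# CD is a boolean formula over attribute names; KD is a set of attributes.

from dataclasses import dataclass
from typing import Callable, Set

Formula = Callable[[Set[str]], bool]   # CD: a predicate over attribute sets

@dataclass
class PrivateKey:
    kd: Set[str]          # KD: the attributes held by this user

@dataclass
class Ciphertext:
    cd: Formula           # CD: the embedded access formula
    payload: str          # stands in for the encrypted data

def R(cd, kd):
    # The relation R(CD, KD): does the attribute set satisfy the formula?
    return cd(kd)

def keygen(attributes):
    return PrivateKey(kd=set(attributes))

def encrypt(cd, message):
    return Ciphertext(cd=cd, payload=message)

def decrypt(sk, ct):
    return ct.payload if R(ct.cd, sk.kd) else None

# Usage with the informant's policy from above.
policy = lambda S: "Internal Affairs" in S or {"Undercover", "Central"} <= S
ct = encrypt(policy, "the message")
print(decrypt(keygen({"Undercover", "Central"}), ct))   # the message
print(decrypt(keygen({"Undercover", "North"}), ct))     # None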

Re-envisioning data sharing through functional encryption opens up several new doors. A user encrypting data only needs to describe how they want to share it, not necessarily identify whom they are sharing it with. This flexibility is especially important in systems where authorized users with matching credentials will be added long after the data is encrypted. In addition, the set of matching users that are authorized to receive the data might be private information hidden even from the encrypting user. Alternatively, a functional encryption system could be used as a means for the encryptor to label the information and let keys specify access policies over these labels. Consider a database system where the ciphertext descriptor CD describes a record's metadata and a user's key embeds a policy over that metadata.
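
In that arrangement the roles of CD and KD are simply swapped relative to the toy model above: the ciphertext descriptor is a set of metadata labels and the key descriptor is a policy over them. A minimal sketch of this "key-policy" flavor follows; all names and labels here are hypothetical.

# Key-policy variant of the toy model above: the key carries the formula and
# the ciphertext carries a set of metadata labels (all names are hypothetical).

def r_key_policy(cd_labels, kd_formula):
    # R(CD, KD): does the record's metadata satisfy the key's policy?
    return kd_formula(cd_labels)

record_labels = {"arrest-record", "year:2009", "division:Central"}
analyst_policy = lambda labels: "arrest-record" in labels and "division:Central" in labels

print(r_key_policy(record_labels, analyst_policy))   # True: this key can read the record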

Just as public key encryption eliminates the need for a user to establish an a priori secret before encrypting, functional encryption liberates the encryption routine from needing to identify the particular recipients (if any) of the data.

Research Directions and FAQs

The concept of functional encryption opens up several research directions and general questions:

What functional encryption systems can we realize? In 2005, Sahai and Waters introduced a system, known as Attribute-Based Encryption (ABE), where the relation R tests whether a set of attributes satisfies a boolean formula. Ideally, we would like to let CD describe any program over the user's credentials (i.e., CD describes a Turing Machine).

Is this technology efficient? Has any of it been implemented? Functional encryption is currently being tested in a number of messaging and health record applications. In most realizations, the size of the ciphertext grows with the complexity of the ciphertext descriptor. Thus, the relative efficiency of the system may depend on the overall complexity of the desired access structure. An open source library implementing a suite of functional encryption schemes is publicly available via the Source Code tab above.
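
As a rough illustration of that scaling, many pairing-based ABE constructions carry on the order of one or two group elements per leaf of the access formula, plus a small constant overhead. The back-of-the-envelope estimate below assumes hypothetical parameters (2 elements per leaf, 64-byte group elements); the actual constants differ from scheme to scheme.

# Back-of-the-envelope ciphertext size for a pairing-based ABE construction.
# Hypothetical parameters: roughly 2 group elements per formula leaf plus a small
# constant, with 512-bit (64-byte) group elements; real schemes vary in constants.

GROUP_ELEMENT_BYTES = 64
ELEMENTS_PER_LEAF = 2
CONSTANT_ELEMENTS = 2

def estimated_ciphertext_bytes(num_leaves):
    return (ELEMENTS_PER_LEAF * num_leaves + CONSTANT_ELEMENTS) * GROUP_ELEMENT_BYTES

for leaves in (3, 20, 100):
    print(leaves, "leaves ->", estimated_ciphertext_bytes(leaves), "bytes")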

How do we prove functional encryption systems secure under simple assumptions? Since the first realizations of Identity-Based Encryption (IBE), we have witnessed encryption systems with an increasing amount of "structure", such as Hierarchical IBE and ABE. Unfortunately, the added richness in structure has increased the difficulty in proving these systems secure. Most prior work proves security in the "selective" model, where we make the (unnatural) assumption that the adversary commits to the ciphertext descriptor CD* of the challenge ciphertext before seeing the system's parameters. Our group has made progress on devising techniques for deriving full (non-selective) security, while using simple number-theoretic assumptions.

How do we encrypt data across multiple administrative domains? In our prior examples, we assumed that private keys were issued by a single trusted authority. However, we will often be interested in systems where a user has credentials that cross multiple organizational domains. These organizations will often be very diverse, lack a common authority to trust, and may not even be aware of each other. For example, a system user, Alice, might have a set of credentials that ranges from being registered as a student, to having good credit, to being connected as a friend of Bob on a social network. Clearly, it is unreasonable to assume that one authority is in a position to administer such a diverse set of domains. How can we encrypt such that the ciphertext descriptor CD specifies an access policy across a diverse set of domains? This is still an open area of research. Again, the main challenge we will face is in preventing collusion attacks.