Tredisec Primitives
About Tredisec
TREDISEC is a European collaborative Research and Innovation Action that leverages existing or novel cryptographic protocols and system security mechanisms, which offer strong data confidentiality, integrity and availability guarantees while permitting efficient storage and data processing across multiple tenants.
From a practical standpoint, the ambition of this project is to develop systems and techniques that make the cloud a secure and efficient place to store data. We plan to step away from a myriad of disconnected security protocols or cryptographic algorithms, and to converge instead on a (possibly standardized) single framework where all objectives are met to the highest extent possible.
Started on April 1st 2015, the ultimate goal of TREDISEC is to converge to a unified framework where resulting primitives are integrated, while following the end-to-end security principle as closely as allowed by functional and non-functional requirements.
Project Facts
- Project Title
- Trust-aware, REliable and Distributed Information SEcurity in the Cloud
- Project Acronym
- TREDISEC
- Call
- H2020-ICT-2014-1
- Topic
- ICT-32-2014 Cybersecurity, Trustworthy ICT
- Type of Action
- Research and Innovation action
- Grant Agreement no.
- 644412
- Duration
- 36 months
- Date of Start
- 1st April, 2015
- Budget
- €6,470,618.94
Learn more about the Project Consortium
Project Abstract
The current trend for data placement shows a steady shift towards "the cloud". The advent of cloud storage and computation services however comes at the expense of data security and user privacy.
To remedy this, customers nowadays call for end-to-end security, whereby only end-users and authorized parties have access to their data and no one else. This is especially true after the recent outbreak of data breaches and the revelations of global surveillance programs.
In the TREDISEC project, we address this problem by developing systems and techniques that make the cloud a secure and efficient haven for storing data. We plan to step away from a myriad of disconnected security protocols or cryptographic algorithms, and to converge on a single framework where all objectives are met.
More specifically, TREDISEC addresses the confidentiality and integrity of outsourced data in the presence of a powerful attacker who controls the entire network. In addition, our proposed security primitives support data compression and data deduplication, while providing the necessary means for cloud providers to efficiently search and process encrypted data.
By doing so, TREDISEC aims to create technology that will impact existing businesses and generate new profitable business opportunities long after the project is concluded.
Objectives
Objective 1
Designing novel end-to-end security solutions for scenarios with conflicting functional and security requirements
- Supporting data reduction: enabling cloud providers to perform data reduction (e.g., deduplication and compression) without compromising the confidentiality of outsourced data (see the first sketch after this list).
- Enabling secure data processing: focusing on new techniques that enable the processing of encrypted data in an efficient and privacy-preserving manner, guaranteeing efficient data processing that scales with large amounts of outsourced data.
- Enhancing data availability and integrity: ensuring the availability and the integrity of outsourced data against misbehaving cloud providers, and allowing users to verify both while relying only on low-capacity devices such as smartphones. This entails that the verification process performed by the end-user must not be greedy in terms of either bandwidth or computation (see the second sketch after this list).
- Ensuring user isolation in multi-tenant systems: identifying platform and operating system primitives that provide strong isolation guarantees to individual users' workloads, and integrating these solutions into current and future infrastructures such that they only minimally impact performance and efficiency.
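To make the data-reduction objective concrete, the sketch below shows convergent (message-locked) encryption, one well-known building block for deduplicating encrypted data: since the key is derived from the content itself, identical files encrypt to identical ciphertexts that the provider can deduplicate without seeing the plaintext. This is a minimal illustration assuming the third-party `cryptography` package, not the primitive actually developed in TREDISEC.

```python
# Minimal sketch of convergent (message-locked) encryption for secure
# deduplication. Illustrative only -- NOT the TREDISEC primitive.
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

def convergent_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    # The key is a hash of the content, so equal plaintexts yield equal keys.
    key = hashlib.sha256(plaintext).digest()
    # Deterministic nonce: acceptable here because each key encrypts exactly
    # one plaintext (the one it was derived from).
    nonce = hashlib.sha256(key).digest()[:12]
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return key, ciphertext

def convergent_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    nonce = hashlib.sha256(key).digest()[:12]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

# Two tenants uploading the same file produce the same ciphertext,
# so the provider stores a single copy:
data = b"same file, uploaded by two tenants"
k1, c1 = convergent_encrypt(data)
k2, c2 = convergent_encrypt(data)
assert c1 == c2
assert convergent_decrypt(k1, c1) == data
```

Note that this determinism is also the scheme's known weakness: it leaks file equality and allows brute-force attacks on predictable content, which is precisely the kind of security/efficiency tension the project investigates.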
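Likewise, for the availability-and-integrity objective, a standard low-bandwidth building block is a Merkle tree: the client keeps only the 32-byte root hash, and any block can be verified with a proof of logarithmic size. The sketch below is a generic, self-contained illustration (standard library only), not the actual TREDISEC integrity primitive.

```python
# Generic Merkle-tree sketch: a client storing only the root hash verifies
# any block with O(log n) hashes. Illustrative only -- NOT the TREDISEC
# storage-integrity primitive.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks: list[bytes]) -> list[list[bytes]]:
    # All tree levels, leaves first (assumes a power-of-two block count).
    levels = [[h(b) for b in blocks]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def prove(levels: list[list[bytes]], index: int) -> list[bytes]:
    # Sibling hashes on the path from leaf `index` up to the root.
    proof = []
    for level in levels[:-1]:
        proof.append(level[index ^ 1])
        index //= 2
    return proof

def verify(root: bytes, block: bytes, index: int, proof: list[bytes]) -> bool:
    digest = h(block)
    for sibling in proof:
        digest = h(digest + sibling) if index % 2 == 0 else h(sibling + digest)
        index //= 2
    return digest == root

blocks = [b"block-%d" % i for i in range(8)]   # data held by the provider
levels = build_tree(blocks)
root = levels[-1][0]                           # all the client has to store
assert verify(root, blocks[5], 5, prove(levels, 5))
```

For eight blocks the proof is just three hashes, so even a smartphone can cheaply check that the provider still stores a block intact.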
Objective 2
Implementing a unified framework to support the orchestration of the security mechanisms in different scenarios.
- Once the different security mechanisms have been designed, our ultimate goal is to produce a realistic system where the various features work in a holistic manner.
- The integration of the newly proposed features as part of a unique architecture requires a delicate design of the various system components in order to prevent any possible incompatibilities that might arise between them.
By devising and evaluating such primitives, and the framework to orchestrate them, we will foster the concept of "security and privacy by design", which in turn will provide strong incentives for small and medium businesses to securely store and process their outsourced data in the cloud.
Atos
Atos SE (Societas Europaea) is a leader in digital services with 2014 pro forma annual revenue of circa € 11 billion and 93,000 employees in 72 countries.
Serving a global client base, the Group provides Consulting & Systems Integration services, Managed Services & BPO, Cloud operations, Big Data & Security solutions, as well as transactional services through Worldline, the European leader in the payments and transactional services industry. With its deep technology expertise and industry knowledge, the Group works with clients across different business sectors: Defense, Health, Manufacturing, Media & Utilities, Public Sector, Retail, Telecommunications and Transportation.
Atos operates under the brands Atos, Atos Consulting and Technology Services, Atos Worldline and Atos WorldGrid.
Atos Research & Innovation (ARI) is the research, development and innovation hub of Atos and a key reference for the whole Atos group, delivering technology innovation to our customers. Our background of 27 years of participation in EC projects helps us reinforce our links with our customers by bringing research outcomes to them and strengthening our role as a source of innovative ideas.
With presence in Spain, Belgium, Slovakia and Turkey, ARI performs research on emergency management, security, transport, health, technology enhanced learning, trust, identity, semantics, media, services, GIS and smart object technologies.
Atos is a founding member of the European Technology Platform NESSI (Networked European Software and Services Initiative), and is a member of eSafety Forum, ARTEMIS, NEM, EOS and Nanomedicine, as well as of the Spanish platforms Logistop for Integral Logistics, eMOV for mobility, Railway, Maritime, eSEC for security, PROMETEO for embedded systems, PLANTETIC for ICT and electronics and es.Internet for Future Internet technologies. Atos is also a member of the re-founded NET!WORKS ETP (industrial core group for the 5G PPP). Atos is a major partner in Future Internet-related initiatives, including industrial ones such as EFII (European Future Internet Initiative) and FIRA (Future Internet Research Alliance), as well as project-oriented groups, like the ones established by FIA, FIRE and FInES. Furthermore, Atos is a member of several cybersecurity initiatives, such as the International Cyber Security Protection Alliance (ICSPA), the Spanish Industrial Cybersecurity Center (CCI) and the Cloud Security Alliance (CSA).
Arsys Internet S.L.
Arsys Internet S.L. was founded in 1996 as an Internet Access Provider and progressively diversified its activity to include other Internet Services, taking advantage of the rapid evolutionary trend of the sector. Eventually, Arsys became an Internet Service Provider (ISP), which continues to be the core of its business activity. Arsys is part of 1&1 Internet AG, United Internet Group.
Arsys is a leading Spanish Cloud Service Provider offering Internet solutions for companies and SOHOs. Pioneer in Cloud Hosting in Europe through its commitment to innovation, Arsys provides an easy integration of Information Technology into businesses with a wide range of Web Presence, Managed Hosting and Infrastructures services.
Arsys offers a full suite of Internet-related services that include Cloud hosting, web hosting, dedicated servers, virtual private servers (VPS), domain name registration, email, marketing online tools, e-commerce, and other complementary services, such as Cloud storage and backup online.
With 330,000 customer contracts, Arsys has a staff of over 300 employees and manages three Data Centers in Spain, hosting over 200,000 web pages and 1.4 million email accounts.
Arsys is a wholly owned subsidiary of 1&1 Internet AG, United Internet Group, a public company with a market cap of more than 5 Billion Euros. With around 13.3 million fee-based customer contracts and around 31 million ad-financed free accounts, United Internet AG is the leading European internet specialist. The heart of United Internet is the high-performance Internet Factory with over 6,845 employees, of which 2,000 are engaged in product management, development and data centers. In addition to the high sales strength of its established brands GMX, WEB.DE, 1&1, united-domains, Arsys, Fasthosts, InterNetX, Sedo and affilinet, United Internet stands for outstanding operational excellence with around 44 million customer accounts at 7 data centers with around 70,000 servers.
IBM Research GMBH
IBM Research - Zurich (formerly known as Zurich Research Laboratory), with approximately 300 employees, is a wholly-owned subsidiary of the IBM Research division with headquarters at the T.J. Watson Research Center in Yorktown Heights, NY, USA. IBM Research - Zurich, which was established in 1956, represents the European branch of IBM Research.
At IBM Research - Zurich, scientific and industrial research is conducted in five scientific and technical departments: Science and Technology, Systems, Storage, Computer Science, and Mathematical and Computational Sciences. The main research topics are nanotechnology, advanced server and storage technology, security, privacy, risk and compliance, computational biochemistry and materials science, chip cooling technologies, and business optimization and transformation.
IBM Research - Zurich is world-renowned for its outstanding scientific achievements - most notably Nobel Prizes in Physics in 1986 and 1987 for the invention of the scanning tunneling microscope and the discovery of high-temperature superconductivity, respectively. Other key innovations include: Trellis-coded modulation, which revolutionized data transmission over telephone lines; Token Ring, which became a standard for local area networks and a highly successful IBM product; the Secure Electronic Transaction (SET) standard used for highly secure payments; and JavaCard™ smartcard technology.
IBM Research - Zurich is dedicated not only to fundamental research, but also to exploring and creating innovative industry- and customer-oriented solutions in several key areas, including future chip technology, nanotechnology, supercomputing, security and privacy, risk and compliance, as well as business optimization and transformation. The Zurich laboratory is involved in more than 80 joint projects with universities throughout Europe, in research programs established by the European Union and the Swiss government, and in cooperation agreements with research institutes of industrial partners.
Eidgenoessische Technische Hochschule Zuerich
The Swiss Federal Institute of Technology Zurich (ETH Zurich) is an institution of the Swiss Confederation dedicated to higher learning and research. ETH Zurich has a central infrastructure for, and considerable experience with, the administration of EU-projects; in particular, since the start of the 6th Framework Programme of the European Union, ETH Zurich has been involved in 178 EU-projects.
The System Security Group (http://www.syssec.ETH.ch), headed by Srdjan Capkun, consists of 14 researchers working on different aspects of system and network security. The focus of the group's research is the development of foundations and primitives that enable security and privacy in applications of wireless networks and distributed systems. The group focuses on the design and analysis of security protocols, system prototyping and the experimental validation of solutions.
The group collaborates with several international academic and industrial partners. Srdjan Capkun is also a member of the Zurich Information Security Center (ZISC, http://www.zisc.ETH.ch), a cooperation between members of ETH Zurich and industry, with the aim of providing a coordinated program of state-of-the-art research and education in information security.
The System Security Group at ETH Zurich has a strong background in the design and analysis of security protocols for wireless and wireline networks. The group has developed several prototypes and has an excellent publication track record.
NEC Europe LTD
NEC Corporation is a leader in the integration of IT and network technologies that benefit businesses and people around the world. By providing a combination of products and solutions that cross-utilise the company's experience and global resources, NEC's advanced technologies meet the complex and ever-changing needs of its customers.
NEC brings more than 100 years of expertise in technological innovation to empower people, businesses and society. NEC Europe was founded in 1995 as a subsidiary of NEC Corporation and itself holds 15 subsidiary organizations all over Europe, all of which build upon NEC's heritage and reputation for innovation and quality by providing expertise, solutions and services to a broad range of customers, from telecom operators to enterprises and the public sector.
NEC Laboratories Europe is a research laboratory established by NEC Europe Ltd. and is located in Heidelberg, Germany. NEC Labs Europe conducts leading research and development across IT and communications, including Future Internet, next generation fixed and mobile networks, security and privacy technologies, the Internet-of-Things, multimedia and smart energy services. NEC Laboratories Europe has provided solutions for Identity and Access Management, processing of encrypted data and for mobile device security.
EURECOM
Eurecom is a graduate school of engineering and research institute in telecommunications located in Sophia Antipolis, France. It is a consortium of industrial and academic members including Institut Télécom, Télécom ParisTech, EPFL, Politecnico di Torino, Helsinki University of Technology (Aalto University), Technische Universität München (TUM), Norwegian University of Science and Technology (NTNU), Swisscom, Thales, SFR, Orange, Symantec, STEricsson, Cisco, BMW Group, SAP and Monaco Telecom.
Eurecom is a member of the SCS Pôle de compétitivité. Eurecom employs 80 scientists in three research departments: networking and security, multimedia, and mobile communications. Eurecom has participated in several European and national research projects and is currently involved in more than 45 national and international research projects.
Eurecom's security and privacy group has a strong body of previous research in designing security protocols for self-organizing mobile and opportunistic networks and in devising and analyzing secure large-scale distributed systems.
Greek Research And Technology Network S.A.
GRNET is the National Research and Education Network for Greece, providing network connectivity to all universities and research centres in Greece, and also connecting the Greek academia with the GÉANT pan-European network. GRNET has also provided computational resources, first in the form of Grid computing and more recently with Infrastructure-as-a-Service cloud computing. In particular, GRNET has developed and provides as a production service Okeanos (http://okeanos.grnet.gr), a computing and storage cloud infrastructure that serves more than 6,000 users with more than 8,000 virtual machines. GRNET has significant development experience, building software both for in-house needs and in the context of the numerous European projects in which it has participated. It has also developed and offers a secure, verifiable e-voting platform, Zeus (http://zeus.grnet.gr), which has been used for more than 120 elections involving more than 22,000 voters to date.
SAP SE
SAP has grown to become the world's leading provider of business software solutions. With 12 million users, 96,400 installations, and more than 1,500 partners, SAP is the world's largest inter-enterprise software company and the world's third-largest independent software supplier, overall. SAP solutions help enterprises of all sizes around the world to improve customer relationships, enhance partner collaboration and create efficiencies across their supply chains and business operations.
SAP industry solutions support the unique business processes of more than 25 industry segments, including high tech, retail, manufacturing and financial services. Via Horizon 2020 projects, SAP bridges the gap between open, collaborative research with external partners and exploitation in new or existing SAP product lines through SAP's development groups. In the context of this document, SAP refers to SAP AG and its Product Security Research unit.
The 40+ researchers of the Product Security Research unit focus on security engineering (e.g., the automation of the secure software development lifecycle), secure business execution (e.g., business process security and security in cloud-based business applications) and secure operations (e.g., secure maintenance and support of complex and heterogeneous cloud IT landscapes). Recent results include, among many others, a searchable encrypted cloud database, an attack monitoring framework for ERP systems, a security validator for business processes, visualization and enforcement of security constraints, sticky policies for cloud-based applications, and cloud-based secure multi-party computation schemes for optimization problems in distributed supply chains. The Product Security Research team has a long history of leading European collaborative research projects to success (15+ projects in FP7) and is actively contributing to shaping the security research agenda for Europe as part of, for example, the ongoing EU ICT projects A4CLOUD, WEBSAND, PRACTICE, ANIKETOS and POSECCO.
IDEMIA
IDEMIA is the global leader in trusted identities for an increasingly digital world, with the ambition to empower citizens and consumers alike to interact, pay, connect, travel and vote in ways that are now possible in a connected environment.
Securing our identity has become mission critical in the world we live in today. By standing for Augmented Identity, we reinvent the way we think, produce, use and protect this asset, whether for individuals or for objects. We ensure privacy and trust as well as guarantee secure, authenticated and verifiable transactions for international clients from Financial, Telecom, Identity, Security and IoT sectors.
With close to €3bn in revenues, IDEMIA is the result of the coming together of OT (Oberthur Technologies) and Safran Identity & Security (Morpho). This new company counts 14,000 employees of more than 80 nationalities and serves clients in 180 countries.
EU-Funded Cybersecurity and Privacy investment shortening the gap from research to innovation
The CSP Innovation Forum 2015 was organised by the European Commission, DG CNECT (Unit H4 Trust & Security) and the CSP Forum on 28 and 29 April.
https://www.cspforum.eu/2015/
Over 40 top EU-funded trust and security projects, including TREDISEC (Trust-aware, REliable and Distributed Information SEcurity in the Cloud), with focused research activities in hot topics such as mobile device technologies and tools, cloud security, cryptography and trustworthy network and service infrastructures, are being showcased live at a major EU innovation forum this week, where over 500 cybersecurity and privacy experts, project leaders, industry representatives, academics and visionaries are also pooling their knowledge to create a safer and more secure ICT environment.
TREDISEC project to enhance end-to-end security in untrusted cloud environments
TREDISEC will develop new systems and techniques that make the cloud a secure and efficient haven to store and process data. The objective is to step away from a myriad of disconnected security protocols or cryptographic algorithms, and to converge on a single framework where all objectives are met.
The project has received funding from the European Commission under the Information and Communication Technologies (ICT) theme of the Horizon 2020 framework programme (H2020-ICT-2014-1). The project started in April 2015, coordinated by Atos with partners NEC Europe (United Kingdom), IBM Research (Switzerland), Eurecom (France), Arsys (Spain), GRNET (Greece), SAP (Germany) and Morpho (France). More information about the project is available at www.tredisec.eu.
For more information, please contact the project coordinator, Ms. Beatriz Gallego-Nicasio, at beatriz.gallego-nicasio@atos.net.
Project Structure
Work Breakdown Structure
TREDISEC is designed and structured from its conception to accomplish and satisfy the mission and objectives described in About Tredisec.
To this end, the work plan is organised in 7 work packages, whose interdependencies and relations are depicted in the following figure.

- WP2 will describe the use cases as well as their technological and business context, in order to elicit a set of requirements that will underlie the TREDISEC architectural model.
- These requirements will also act as the internal clock of the project, synchronizing the research and development activities of WP3 – WP6.
- WP3 – WP5 will conduct innovative research that spans the whole architecture of the project and will implement the security mechanisms, services and components that permit integrating security and trust aspects within currently deployed cloud ecosystems.
- WP6 will implement the TREDISEC framework, enabling the services and components developed in WP3, WP4 and WP5 to seamlessly work together.
- The required coordination between the results of WP3 – WP5 and the integration work in WP6 will be achieved through three major milestones: at M20 with the design of the primitives, at M30 with the implementation of the primitives and the unified framework, and at M33 with their deployment into the use case evaluation environment.
- WP6 also envisages deploying the TREDISEC framework within test cloud environments, which will provide the necessary support for the evaluation of TREDISEC in the real use cases.
- Finally, WP7 will coordinate communication, exploitation and dissemination activities.
Roadmap

- MS1: Use cases and scenario context definition due at M6.
- MS2: Consolidated requirements and architectural models due at M12.
- MS3: Design of the security primitives and framework due at M20.
- MS4: Implementation of the security primitives and the framework due at M30.
- MS5: Deployment of the Use Cases Evaluation environment due at M33.
- MS6: Final evaluation of TREDISEC due at M36.
Architecture
Architectural concept
TREDISEC aims to design, implement and deploy a set of security properties and solutions (in the form of security primitives) related to security and privacy in the cloud (e.g. secure storage, access control, secure deletion, etc.). In particular, the core feature of the proposed security primitives is that they not only preserve the security and privacy of providers and TREDISEC users but also improve the efficiency and cost effectiveness of the cloud systems where they are deployed.
Initially, we identified three different architectures for providing security in the cloud: security-as-a-service, end-to-end security, and a hybrid approach combining the two. After analysing these approaches, we concluded that end-to-end security is the best strategy for increasing the security of the systems, so it forms the basis for the security primitives to be designed and developed in the project and provided by the TREDISEC Framework.
The TREDISEC Architecture has been designed in order to fulfil all the requirements identified in the first stage of the project while providing user-friendly functionalities for the creation, management and use of the security primitives.
The TREDISEC Framework
The TREDISEC Framework is a component that allows the creation, use, management and deployment of security primitives in a target cloud. It provides online packaging of security primitives for the different roles identified (End-User, Security Expert Engineer, TREDISEC Framework Administrator and Security Technology Provider), together with tools for specific functionalities (e.g. a user interface for managing the framework, and a testing and deployment component for testing the security primitives and deploying them).
The framework offers three operational modes: development, maintenance and provisioning. The development mode covers the design, development and testing of the security primitives along their lifecycle (from security primitive pattern to TREDISEC Recipe); the maintenance mode covers the updating, refinement and extension of the different TREDISEC artefacts; finally, the provisioning mode covers the deployment of TREDISEC Recipes into the target cloud.
Development mode
This mode covers the creation of the security primitives and the TREDISEC Recipes, as well as their testing and preparation. Here, both the Security Expert and the Security Technology Provider participate, using and creating the different artefacts of the security primitives, which are described in the next section.
Maintenance mode
In this mode, the Security Technology Provider is able to modify a security primitive in order to fix an error or unexpected behaviour, add new functionality, create a new composition of security primitives, or add a new implementation of the security or performance solution provided by the security property.
Provisioning mode
The provisioning mode refers to the process of applying a security primitive to the target cloud for its use. The artefacts are deployed and configured in a semi-automatic way, according to the parameters specified by the user for each particular instantiation (system under development).
Security Primitives
The Security Primitive artefact describes a security solution for a cloud system together with its performance capabilities. Given the constraints and the different characteristics this artefact has to represent, and the different implementations (with their specific configurations) it can have, we have designed an architecture for the security primitives (see Figure 4) that covers all the requirements defined in the project. Security primitives pass through different phases, from their initial design and definition until their deployment into a cloud system: the so-called Security Primitives Lifecycle, depicted in Figure 5. Two roles mainly interact in this lifecycle: the security expert engineer and the security technology provider.
The initial, base artefact, the security primitive pattern, is created by the security expert engineer using the TREDISEC Framework, her expertise and knowledge, and security and performance information about the cloud. In this way she creates a security solution for a cloud system (together with its performance capabilities) and stores it in the framework's repository of security primitives.
Next, the security technology provider obtains a security primitive and supplies an implementation, along with implementation-specific information (refining, at a more concrete level, the information already provided about the solution) and information about the cloud system the security primitive targets. The result of this process is the security primitive implementation, which is also stored in the repository of the TREDISEC Framework. This artefact is then tested in the testing environment, which reproduces the characteristics and requirements of the target cloud system where the security primitive will be used, and the security technology provider updates and refines it. If necessary, the security technology provider can also send feedback to the security expert engineer when an update is needed at a higher level.
Finally, once the security primitive implementation has been tested, it is stored in the TREDISEC Framework as a TREDISEC Recipe, which contains not only the implementation of the security primitive but also the deployment information, requirements, etc. for the target cloud.
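As a purely illustrative aid (the actual artefact format is specified in the project deliverables, not here), the lifecycle above could be modelled as follows; every name and field in this sketch is a hypothetical assumption of ours.

```python
# Hypothetical model of the pattern -> implementation -> recipe lifecycle.
# All names and fields are illustrative assumptions, not the format defined
# in the TREDISEC deliverables.
from dataclasses import dataclass, field

@dataclass
class SecurityPrimitivePattern:
    # Created by the security expert engineer.
    name: str
    security_goals: list[str]

@dataclass
class TredisecRecipe:
    # Produced once an implementation has passed the testing environment:
    # the implementation plus everything needed to deploy it.
    pattern: SecurityPrimitivePattern
    implementation_ref: str                      # tested implementation artefact
    target_cloud: str                            # cloud system the recipe targets
    deployment_params: dict = field(default_factory=dict)
    requirements: list[str] = field(default_factory=list)

recipe = TredisecRecipe(
    pattern=SecurityPrimitivePattern(
        name="secure-deduplication",
        security_goals=["confidentiality", "storage-efficiency"],
    ),
    implementation_ref="registry/secure-dedup:1.0",
    target_cloud="openstack-swift",
    deployment_params={"chunk_size_kb": 64},
    requirements=["object-storage", "tls"],
)
```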
More details about the TREDISEC architecture can be found in deliverables D2.3 and D2.4.
Use Cases
TREDISEC will advance the state of the art in cloud computing by adding security-oriented features to cloud services. To describe these features and to arrive at a set of requirements for them, the TREDISEC end-users and industrial partners developed a set of representative, typical usage scenarios where end-to-end security is required; these serve as a basis for deriving functional and non-functional requirements. The final set of use cases consists of 6 scenarios driven by four partners of the consortium and covers the full spectrum of technologies to be provided by TREDISEC.
| Partner | Cloud Technology | Use Case | TREDISEC Challenges |
|---|---|---|---|
| GRNET | ~Okeanos / Synnefo | UC 1: Enhance Storage Efficiency Securely | Storage Integrity; Verifiable Ownership; Data Confidentiality and Deduplication; Secure Enforcement of Policies in Clouds |
| GRNET | ~Okeanos / Synnefo | UC 2: Multi-Tenancy and Access Control | Access Control Models for Multi-Tenancy; Resource Isolation in Multi-Tenant Systems; Secure Enforcement of Policies in Clouds |
| ARSYS | CloudBuilder / GlusterFS | UC 3: Optimised WebDav Service for Confidential Storage | Storage Integrity; Access Control Models for Multi-Tenancy; Resource Isolation in Multi-Tenant Systems; Data Confidentiality and Deduplication; Secure Enforcement of Policies in Clouds |
| MORPHO | Outsourced cloud computations on biometric data | UC 4: Enforcement of Biometric-based Access Control | Processing Verifiability; Optimizing Encryption for Data Outsourcing; Privacy-Preserving Primitives for Data Processing |
| MORPHO | Outsourced cloud computations on biometric data | UC 5: Secure Upgrade of Biometric Systems | Optimizing Encryption for Data Outsourcing; Privacy-Preserving Primitives for Data Processing |
| SAP | Outsourced legacy databases with encryption | UC 6: Database Migration into a Secure Cloud | Resource Isolation in Multi-Tenant Systems; Data Provisioning; Optimizing Encryption for Data Outsourcing |
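UC 4 and UC 5 rely on privacy-preserving primitives for processing outsourced (here, biometric) data. A classical illustration of computing on encrypted data is additively homomorphic encryption; the toy Paillier sketch below, with deliberately tiny parameters, shows the idea of adding values under encryption. It is a textbook example of ours, not the project's actual primitive, and is far too small to be secure.

```python
# Toy Paillier cryptosystem: multiplying ciphertexts adds the plaintexts.
# Illustrative only -- tiny parameters, NOT secure, NOT a TREDISEC primitive.
import math
import random

p, q = 293, 433                      # toy primes; real keys use ~1536-bit primes
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse (Python 3.8+)

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(20), encrypt(22)
assert decrypt((c1 * c2) % n2) == 42   # the provider adds values it cannot read
```

Such additive homomorphism is what would let a server combine, say, encrypted match scores without ever decrypting them, at a computational cost that motivates the efficiency research in the project.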
Mentions in the Spanish press about the TREDISEC project
A summary of mentions of the TREDISEC project in the Spanish press, published to date.
- Press release published in GPSNews.es, online magazine specialized in Communication and PR.
- Press release published in CIO.es, website specialized in IT trends.
- Press release published in strategicpartner.es, IT CIO.es and techWEEK.es, online channels belonging to ITMedia Network, communication platform addressed to IT professionals.
- Press release published in elcandelerotecnologico.com, website specialized in IT news.
- Press release published in Datacenter Dynamics, an information provider addressed to professionals of the data centre sector.
- Press release published in ITSeguridad.es, a website specialized in cybersecurity.
TREDISEC was present at the Cloud World Forum, held 24-25 June in London, courtesy of the CSP Forum
The Cloud World Forum took place at the Olympia Grand in London on 24-25 June and this year grew its audience and floor plan by 30%.
Cloud World Forum is EMEA's largest cloud expo. Thousands of delegates come from more than 70 countries around the world to meet the industry's leading solution providers. Now celebrating its seventh year, the show gathers the pivotal players of the cloud revolution and features 16 content theatres. More than 300 speakers from multinationals, SMEs, public sector organisations, online players, regulators, telcos and analysts took the floor in engaging, thought-provoking keynotes, hands-on labs, brainstorming sessions and live demos over two days.
TREDISEC was present at this event courtesy of the CSP Forum, which offered its space to exhibit flyers of the project.

First Project Workshop: Use cases definition
The TREDISEC consortium held its first internal project workshop, on the definition of the use cases, in the context of WP2 (Requirements and Architecture for a Secure, Trusted and Efficient Cloud).
The objective of this workshop was to arrive at a clearer description of the context scenarios and their technical requirements, and to clearly define the set of use cases that will be used to evaluate the TREDISEC technical developments.
The workshop was a one-day meeting hosted by GRNET at its offices in Athens; SAP, ATOS, ARSYS, EURECOM, IBM, MORPHO, NEC and GRNET attended.
Learn more about the TREDISEC Use Cases at http://www.tredisec.eu/content/use-cases
Several members of the TREDISEC consortium will participate in the 1st Workshop on Security and Privacy in the Cloud
The 1st Workshop on Security and Privacy in the Cloud will be held from 28 to 30 September 2015 in Florence, Italy, in conjunction with the IEEE Conference on Communications and Network Security (CNS 2015).
The goal of this workshop is to bring together researchers and practitioners who are interested in discussing the security, privacy, and data protection issues emerging in cloud scenarios, and possible solutions to them.
The workshop seeks submissions from academia, industry, and government presenting novel research, as well as experimental studies, on all theoretical and practical aspects of security, privacy, and data protection in cloud scenarios.
Several members of the TREDISEC consortium will take part in this workshop: as Program Chair (Elli Androulaki, IBM Research) and as part of the Program Committee (Ghassan Karame, NEC Labs; Florian Kerschbaum, SAP; and Melek Önen, EURECOM).
TREDISEC is a leading research, development and innovation project on security in the cloud, involving some of the leading European experts in this field. NEC Labs, SAP and EURECOM are recognized research organisations, well positioned in the state-of-the-art advances driven by the rise of cloud environments in the IT sector.
For more information: http://www.zurich.ibm.com/spc2015/
Post on the Arsys blog about its collaboration with the TREDISEC project
Arsys, a partner in the TREDISEC project consortium, has published a post on its blog today explaining the main features of the project and how Arsys will collaborate on it.
http://www.arsys.info/seguridad/arsys-se-incorpora-al-proyecto-europeo-t...
TREDISEC participates in the e-Democracy 2015 conference
The 6th International Conference on E-Democracy (e-Democracy 2015, http://www.edemocracy2015.eu) will take place in Athens, Greece, on 10-11 December 2015, with a special session on research conducted within European R&D projects related to e-Democracy and e-Participation, e-Government, Security, Privacy and Trust, e-Crime, e-Fraud and Digital Forensics.
In this special session, each participating project can present its main research challenges and results in a short presentation and with a poster, which will be exhibited in or near the conference room.
Participating projects will also be invited to submit a two-page extended abstract, which will be published in the e-Democracy 2015 conference proceedings.
Accepted paper at the eDemocracy 2015 Conference
The paper/extended abstract entitled "TREDISEC: Trust-aware reliable and distributed information security in the cloud" has been accepted for presentation at the conference and for inclusion in the proceedings, subject to registration to the conference.
The paper is a joint work of the TREDISEC consortium, led by Melek Önen (Eurecom).
More information at http://www.edemocracy2015.eu/
Atos has published "Enabling Trusted European Cloud"
Atos has published "Enabling Trusted European Cloud", a document that aims to clarify the fundamental issues that affect cloud developments in Europe, and to reassure potential cloud customers that there are ways of steering through them.
It proposes a roadmap in which all parties, including customers, need to be involved to achieve a vibrant and successful cloud environment that is fit for the European purpose.
Atos, through ARI (Atos Research & Innovation), has run a number of projects that contribute to the concept of a Trusted European Cloud; many of them, supported by EC funding, span security, privacy and the cloud.
TREDISEC appears as one of these projects, mentioned on page 34. TREDISEC focuses on research into securing data access in multi-tenant storage systems.
D1.5. Innovation Strategy and Plan
This document establishes the strategy, processes, milestones and role assignments to ensure innovation-driven research and development in the TREDISEC project.
D1.1. Project Quality Assurance Plan
This document serves two purposes: (i) to establish a framework for the project coordination team to effectively carry out all management activities and to monitor the project for current and future risks so as to avoid negative effects, and (ii) to serve as a handbook that helps every member of the project consortium conduct their contractual project activities to a high level of quality, as well as easing their collaborative work.
D7.1. TREDISEC Public Website
This document accompanies the website, presenting it in its current state, and summarises the Web technology that powers the TREDISEC public website, along with its design and content structure.
TREDISEC mentioned in CSP Forum Newsletter of July 2015
CSP Forum has published its newsletter of July 2015.
TREDISEC is mentioned among the new cybersecurity-related projects that started during the last quarter.
Details of the ICT 2015 networking session organized with the collaboration of TREDISEC now available
Atos Spain will chair a networking session at ICT 2015, the reference event organized by the European Commission to promote knowledge of technological research financed with European funds. The selected projects, TREDISEC, WITDOM and PRISMACLOUD, will discuss the following topic: "Key challenges in end-to-end privacy/security in untrusted environments".
The networking session schedule, with all details, is now available on the website of the event:
https://ec.europa.eu/digital-agenda/events/cf/ict2015/item-display.cfm?i...
There will be three talks given by three speakers, one per project, each dealing with a specific key challenge related to the main topic.
Ghassan Karame (NEC Laboratories), a well-known expert on the subject of the debate, will attend on behalf of TREDISEC and give the talk titled "Data protection versus storage efficiency and multi-tenancy".
In Ghassan's words, the talk's approach will be: "Implementing existing end-to-end security solutions unfortunately cancels out the advantages of the cloud technology such as cost effective storage. We will talk about the challenges resulting from the combination of security, functional and non-functional requirements such as storage efficiency and multi-tenancy."
New mentions in the Spanish press about TREDISEC
Atos takes part in TREDISEC, the EU project to improve security in the cloud.
ICT 2015 networking session: “Key challenges in end-to-end privacy/security in untrusted environments”
The H2020 projects WITDOM (www.witdom.eu), TREDISEC (www.tredisec.eu) and PRISMACLOUD (www.prismacloud.eu), funded under the H2020-ICT-2014-1 call, organize a joint networking session at the ICT 2015 – Innovate, Connect, Transform event, held on October 22nd 2015 at 14:50 CET in Room 8 of the Centro de Congressos de Lisboa, Lisbon (Portugal). The joint networking session will discuss challenges to both security and end-users' privacy when outsourcing data to untrusted environments, such as privacy protection, integrity, data storage efficiency or multi-tenancy.
The networking session organized by WITDOM, TREDISEC and PRISMACLOUD is also supported by the project WISER (www.cyberwiser.eu) from the call H2020-DS-2014-1, acting as conductor of the session.
Mr. Nick Ferguson from Trust-IT Services, coordinator of the EC-funded CloudWATCH2 project, will set the stage with a presentation focusing on the key challenges related to the cloud. These challenges will then be discussed by recognized researchers in the field, examining where the trends are moving. Finally, a question-and-answer slot is offered to interact with the audience about the proposed topics.
Presentation 1: “Cloud challenges to high-demanding privacy scenarios.” Abstract: “Distributed environments, in particular cloud ones, are generally perceived as being untrusted for storing sensitive personal data. Unless specific data protection measures are implemented, Cloud Providers and malicious parties could gain access to such data and make an unlawful use of them, beyond the specific context of explicitly authorized purposes. In case of scenarios with high-demanding privacy needs (such as eHealth or financial data), moving operations to the cloud requires the provisioning of strict guarantees to all involved parties, in full compliance with the law and according to state-of-the-art technology and best privacy-by-design and cloud security practices. In this talk some of these privacy challenges will be presented, as well as some approaches to overcome them.”
Speaker: Nicolas Notario McDonnell (Atos). Project WITDOM.
Presentation 2: “Verifiability and Authenticity of Data and Beyond” Abstract: “In this talk we discuss aspects related to reliably checking that third party infrastructure (i.e., the cloud) behaves as expected when storing and processing data. The focus is on cryptographic measures that ensure and sometimes even enforce honest behaviour and at least allow cryptographically holding the cloud accountable when it deviates from the expected behaviour.”
Speaker: Mr. Henrich C. Pöhls (Passau University). Project PRISMACLOUD.
Presentation 3: “Data protection versus storage efficiency and multi-tenancy” Abstract: “Implementing existing end-to-end security solutions unfortunately may reduce the advantages of the cloud technology such as cost effective storage. We will talk about the challenges resulting from the combination of security, functional and non- functional requirements such as storage efficiency and multi-tenancy.”
Speaker: Ghassan Karame (NEC). Project TREDISEC.
D7.2. Dissemination Plan
This document sets out the initial plan to raise awareness of the TREDISEC project concepts and solutions. In this plan, we only outline the scientific dissemination activities whereas all remaining communication activities are described in deliverable D7.3 “Communication Strategy and Plan”.
The dissemination strategy defines the way TREDISEC will engage different dissemination groups, including academic researchers and business stakeholders. These target communities will be reached through appropriate channels throughout the lifetime of the project. Once the target communities are built, the plan distinguishes between means for maintaining general awareness of the project concept and research results, and means for engaging business communities towards the third year of the project.
In order to maximize the visibility of the project outcomes, a list of potential activities and of the partners' already planned activities is detailed at the end of the document.
D7.3. Communication Strategy and Plan
In the framework of the Horizon 2020 Program, the EU proposes and demands that supported research projects reach society, promoting their value and the benefits derived from the technological and scientific activity through public funding returned to society.
In short, communication and dissemination have become a key asset in the strategy of research projects. It is necessary to show how European innovation and research projects are contributing to an “innovative European Union”, and at the same time, which types of projects are funded and what the results of such an investment are.
The main purpose of deliverable D7.3 is to describe the Communication Strategy of TREDISEC, to give visibility to the entire process and evolution of the project, as well as to its major achievements.
This report includes a social and economic context about cloud security, and breaks down the pursued objectives, defining a set of indicators to measure the grade of achievement.
Next, the document identifies the target groups and defines the key messages that will be the foundation and guidelines of each communication action. These key messages are presented from different points of view. Once our audience and the dedicated key messages are defined, the deliverable describes the tools we are going to use to disseminate the selected messages among the audience.
Finally, an action plan is shown, with specific actions for each year of the project defined according to the stated objectives. The proposed schedule is aligned with the Horizon 2020 guidance.
D2.1. Description of the Context Scenarios and Use Cases Definition
(This document is confidential and thus is not available for download)
TREDISEC will advance the state of the art in cloud computing by adding security-oriented features on cloud services. To describe these features and to arrive at a set of requirements for them, the TREDISEC end-users as well as the industrial partners developed a set of representative use cases following an iterative process that included multiple rounds of review and refinement phases. These use cases outline typical usage scenarios where end-to-end security is required, and serve as a basis to derive functional and non-functional requirements. This document provides a snapshot of the use cases in the first six months of the project.
The final set of use cases consists of six scenarios driven by four partners of the consortium and covers the full spectrum of technologies to be provided by TREDISEC. More specifically:
Use Case 1: Enhance Storage Efficiency Securely. (GRNET)
Use Case 2: Multi-Tenancy and Access Control. (GRNET)
Use Case 3: Optimised WebDav service for confidential storage. (ARSYS)
Use Case 4: Enforcement of Biometric-based Access Control. (MORPHO)
Use Case 5: Secure Upgrade of Biometric Systems. (MORPHO)
Use Case 6: Database Migration into a Secure Cloud. (SAP)
Finally, through the mentoring phase of the use case definition process, each use case has been associated with the relevant technologies that will be developed in the course of the project. This practically translates to a mapping of use cases to Work Packages and Tasks therein in order to concretize the security-specific challenges and the way to address them.
This deliverable is Confidential: only for members of the consortium (including the Commission Services)
Coming soon: 1st TREDISEC General Assembly
The first project General Assembly will take place at Eurecom's premises in Sophia-Antipolis (France), on the 19th and 20th of November. This face-to-face meeting will gather TREDISEC partners to report on the progress of the different work packages, discuss upcoming tasks and deliverables, and revisit the overall management, innovation and communication strategy of the project.
Transparent Data Deduplication in the Cloud
Publication related to WP4.
Abstract
Cloud storage providers such as Dropbox and Google Drive heavily rely on data deduplication to save storage costs by only storing one copy of each uploaded file. Although recent studies report that whole-file deduplication can achieve up to 50% storage reduction, users do not directly benefit from these savings, as there is no transparent relation between effective storage costs and the prices offered to the users.
In this paper, we propose a novel storage solution, ClearBox, which allows a storage service provider to transparently attest to its customers the deduplication patterns of the (encrypted) data that it is storing. By doing so, ClearBox enables cloud users to verify the effective storage space that their data is occupying in the cloud, and consequently to check whether they qualify for benefits such as price reductions, etc. ClearBox is secure against malicious users and a rational storage provider, and ensures that files can only be accessed by their legitimate owners. We evaluate a prototype implementation of ClearBox using both Amazon S3 and Dropbox as back-end cloud storage. Our findings show that our solution works with the APIs provided by existing service providers without any modifications and achieves comparable performance to existing solutions.
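The storage-saving mechanism that ClearBox makes transparent can be illustrated with a toy model. The sketch below is not ClearBox's protocol (which additionally handles encrypted data and a rational provider); it only shows, under hypothetical names, how whole-file deduplication keeps a single copy per content digest, and how the resulting "deduplication factor" per file is exactly the quantity a customer would want to verify rather than take on trust.

```python
import hashlib

class DedupStore:
    """Toy whole-file deduplication store: one physical copy per content digest."""
    def __init__(self):
        self.blobs = {}    # digest -> file bytes, stored only once
        self.owners = {}   # digest -> set of users referencing that copy

    def upload(self, user, data):
        digest = hashlib.sha256(data).hexdigest()
        self.blobs.setdefault(digest, data)            # keep only the first copy
        self.owners.setdefault(digest, set()).add(user)
        return digest

    def dedup_factor(self, digest):
        # number of users sharing one stored copy: the figure a scheme like
        # ClearBox lets each customer verify instead of taking on trust
        return len(self.owners.get(digest, set()))

store = DedupStore()
d1 = store.upload("alice", b"identical quarterly report")
d2 = store.upload("bob",   b"identical quarterly report")
assert d1 == d2 and len(store.blobs) == 1   # two uploads, one stored copy
print(store.dedup_factor(d1))               # 2
```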
Logical Partitions on Many-Core Platforms
The paper is related to the work conducted by ETH in work package 4.
Abstract
Cloud platforms that use logical partitions to allocate dedicated resources to VMs can benefit from small and therefore secure hypervisors. Many-core platforms, with their abundant resources, are an attractive basis to create and deploy logical partitions on a large scale. However, many-core platforms are designed for efficient cross-core data sharing rather than isolation, which is a key requirement for logical partitions. Typically, logical partitions leverage hardware virtualization extensions that require complex CPU core enhancements. These extensions are not optimal for many-core platforms, where it is preferable to keep the cores as simple as possible.
In this paper, we show that a simple address-space isolation mechanism, that can be implemented in the Network-on-Chip of the many-core processor, is sufficient to enable logical partitions. We implement the proposed change for the Intel Single-Chip Cloud Computer (SCC). We also design a cloud architecture that relies on a small and disengaged hypervisor for the security-enhanced Intel SCC. Our prototype hypervisor is 3.4K LOC which is comparable to the smallest hypervisors available today. Furthermore, virtual machines execute bare-metal avoiding runtime interaction with the hypervisor and virtualization overhead.
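As a rough intuition for the isolation mechanism the paper argues for, the conceptual model below (our own simplification, not the SCC implementation) assigns each core a private physical memory range and has the interconnect reject any access outside it; no per-core virtualization extensions are involved.

```python
# Conceptual model of NoC-level address-space isolation: each core gets a
# private physical range, and any access outside it is rejected in transit.

class NoCFilter:
    def __init__(self):
        self.partitions = {}   # core id -> (base address, size)

    def assign(self, core, base, size):
        self.partitions[core] = (base, size)

    def allow(self, core, addr):
        base, size = self.partitions.get(core, (0, 0))
        return base <= addr < base + size

noc = NoCFilter()
noc.assign(core=0, base=0x0000_0000, size=0x1000_0000)   # 256 MiB partition
noc.assign(core=1, base=0x1000_0000, size=0x1000_0000)

assert noc.allow(0, 0x0FFF_FFFF)        # inside core 0's own partition
assert not noc.allow(0, 0x1000_0000)    # core 0 reaching into core 1: dropped
```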
PerfectDedup: Secure Data Deduplication
The paper entitled "PerfectDedup: Secure Data Deduplication", by TREDISEC's partner EURECOM, has been published in the proceedings of DPM 2015.
The paper is co-authored by Melek Önen and Refik Molva from Eurecom, and is related to their contribution to work package 4, task T4.3 "Data Confidentiality and Deduplication".
The paper was presented at the 10th International Workshop on Data Privacy Management (DPM 2015), which took place in Vienna (Austria) on September 21-22, 2015.
Publication related to WP4.
Abstract
With the continuous increase of cloud storage adopters, data deduplication has become a necessity for cloud providers. By storing a unique copy of duplicate data, cloud providers greatly reduce their storage and data transfer costs. Unfortunately, deduplication introduces a number of new security challenges. We propose PerfectDedup, a novel scheme for secure data deduplication, which takes into account the popularity of the data segments and leverages the properties of Perfect Hashing in order to assure block-level deduplication and data condentiality at the same time. We show that the client-side overhead is minimal and the main computational load is outsourced to the cloud storage provider.
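The popularity-based idea can be sketched as follows. This is only the decision logic under an idealized popularity oracle: PerfectDedup's actual contribution is performing the popularity lookup privately via perfect hashing, which the sketch deliberately omits, and the threshold and key derivation below are hypothetical.

```python
import hashlib, os

POPULARITY_THRESHOLD = 2   # hypothetical: blocks seen this often are "popular"
block_counts = {}          # idealized popularity oracle (the real scheme hides
                           # this lookup behind a perfect-hash-based protocol)

def key_for_block(block: bytes) -> tuple:
    h = hashlib.sha256(block).hexdigest()
    block_counts[h] = block_counts.get(h, 0) + 1
    if block_counts[h] >= POPULARITY_THRESHOLD:
        # popular block: content-derived (convergent) key, so identical blocks
        # encrypt identically and remain deduplicable
        return "dedupable", hashlib.sha256(b"convergent|" + block).digest()
    # unpopular block: fresh random key, semantically secure, not deduplicable
    return "private", os.urandom(32)

print(key_for_block(b"common OS library block")[0])   # private (first sighting)
print(key_for_block(b"common OS library block")[0])   # dedupable (now popular)
```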
Some Applications of Verifiable Computation to Biometric Verification
Publication related to WP3
Abstract
Spurred by the advent of cloud computing, the domain of verifiable computations has known significant progress in recent years. Verifiable computation techniques enable a client to safely outsource its computations to a remote server. This server performs the calculations and generates a proof asserting their correctness. The client thereafter simply checks the proof to convince itself of the correctness of the output. In this paper, we study how recent advances in cryptographic techniques in this very domain can be applied to biometric verification.
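A classic, self-contained instance of this idea is Freivalds' check for outsourced matrix multiplication: the client verifies the server's answer with roughly O(n^2) work per trial instead of redoing the O(n^3) product. It is far simpler than the proof systems used for biometric verification, but it conveys the verify-don't-recompute principle:

```python
import numpy as np

def freivalds_check(A, B, C, trials=20):
    """Probabilistically verify C == A @ B with O(n^2) work per trial."""
    n = C.shape[1]
    for _ in range(trials):
        x = np.random.randint(0, 2, size=(n, 1))    # random 0/1 challenge vector
        if not np.array_equal(A @ (B @ x), C @ x):
            return False                             # wrong result caught
    return True                                      # error probability shrinks with trials

A = np.random.randint(0, 10, (50, 50))
B = np.random.randint(0, 10, (50, 50))
assert freivalds_check(A, B, A @ B)            # honest server passes
assert not freivalds_check(A, B, A @ B + 1)    # cheating server is detected
```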
Publication in XLAB blog: "ICT 2015 networking session: Privacy/security in untrusted environments"
ICT 2015 networking session: Privacy/security in untrusted environments
The H2020 projects WITDOM, TREDISEC and PRISMACLOUD organize a joint networking session at the ICT 2015 – Innovate, Connect, Transform event, held on October 22nd 2015 at 14:50 CET in Room 8 of Centro de Congressos de Lisboa, Lisbon (Portugal). The joint networking session will discuss challenges to both security and end-users’ privacy when outsourcing data to untrusted environments, such as privacy protection, integrity, data storage efficiency or multi-tenancy.
Use Case 1: Enhance Storage Efficiency Securely
Partner: GRNET
Overview
To provide secure cloud services to its users, a cloud provider needs to cater for both end-to-end security, which covers the transfer of data between the user and the cloud provider, and data-at-rest security, which concerns the security of the data stored on the cloud provider's premises. Encryption can provide proven solutions to both. At the same time, storage efficiency in multi-tenancy environments can be enhanced by mechanisms such as deduplication, which reduces the storage cost by allowing users to share the same pieces of data, without the users being aware of the underlying mechanisms. Ideally we would like to combine encryption with multi-tenancy; however, well-encrypted data will not exhibit any common pieces that can be leveraged by deduplication mechanisms. Therefore, GRNET requires a mechanism which provides data security features while supporting multi-tenancy in the cloud. The process must be simple but also secure enough for the end-user so that cloud use becomes both practical and trustworthy.
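To make the tension concrete, here is a minimal sketch (our illustration, not a TREDISEC or GRNET design) contrasting randomized encryption, which destroys cross-user duplicates, with convergent encryption, where the key is derived from the content so identical plaintexts yield identical ciphertexts. Convergent encryption re-enables deduplication but is weaker: it leaks plaintext equality and is brute-forceable for predictable files.

```python
import hashlib, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def randomized_encrypt(data: bytes) -> bytes:
    key, nonce = os.urandom(32), os.urandom(12)     # fresh randomness each time
    return AESGCM(key).encrypt(nonce, data, None)

def convergent_encrypt(data: bytes) -> bytes:
    key = hashlib.sha256(data).digest()             # key derived from the content
    nonce = hashlib.sha256(key).digest()[:12]       # deterministic nonce
    return AESGCM(key).encrypt(nonce, data, None)   # same file -> same ciphertext

f = b"a file that two tenants both happen to store"
assert randomized_encrypt(f) != randomized_encrypt(f)   # nothing left to dedup
assert convergent_encrypt(f) == convergent_encrypt(f)   # dedup works on ciphertext
```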
Business Context
GRNET operates, at a production level, a fully functional cloud Infrastructure as a Service (IaaS) called ~okeanos. The ~okeanos service offers both computing and storage resources on demand to thousands of users. As the number of users increases, in tandem with their needs, it becomes imperative to handle resources such as storage more efficiently. An obvious solution is to adopt deduplication techniques for all kinds of data. However, this entails computational cost; moreover, it is not yet clear how to couple deduplication with increased security guarantees, such as those offered by strong cryptography, in a multi-tenant environment.
Technology Context
GRNET offers its cloud services via the open source Synnefo cloud management stack, developed by GRNET. Interaction with Synnefo is done through a well-defined, OpenStack-compatible API. Unfortunately, deduplication and encryption are currently beyond the scope of OpenStack. For online file storage (as opposed to block or volume storage), GRNET already offers deduplication for files, using content-addressable storage. However, there is as yet no fully-fledged deduplication solution for all kinds of data; additionally, there is currently no solution offering the combination of deduplication with end-to-end and data-at-rest security.
Expected Outcomes and Contribution of TREDISEC
GRNET expects to combine its own engineering strengths with the research excellence of the TREDISEC partners so that novel mechanisms combining deduplication with strong encryption can be brought into a production environment.
Use Case 2: Multi-Tenancy and Access Control
Partner: GRNET
Overview
A tenant, that is, a team of collaborators working on shared infrastructure resources, needs to acquire resources from a cloud provider and to perform authorised actions on them. GRNET is motivated to implement a mechanism to support such functions as tenant creation and modification, resource requesting and granting, and policy definition and enforcement for actions on resources. All these must be implemented in a way that makes resource allocation not cumbersome. It is also necessary that enforcing policies of permitted actions on each resource be efficient and unobtrusive. Moreover, data confidentiality should be guaranteed even in the case where attackers (including malicious users) are able to bypass access control mechanisms and directly access the data stored in the cloud. In case data-at-rest encryption is applied, the challenge is to support file-sharing capabilities on the encrypted data across multiple users/tenants.
Business Context
The ~okeanos Infrastructure as a Service provided by GRNET offers file sharing capabilities, but these are achieved by simply granting access permissions to specific users that collaborate. Therefore, resource sharing is limited to projects, with users as members. Meanwhile, there is currently no provision for end-to-end or data-at-rest encryption in the cloud service. GRNET wishes to explore such solutions for the next generation of its cloud services.
Technology Context
The solution currently adopted by GRNET for access control in a multi-tenancy environment is based on the notion of projects, through which resources are allocated to users. The Synnefo cloud management software developed by GRNET enforces resource isolation. However, this leaves open the possibility that attackers could bypass the Synnefo access control mechanism. The problem could be mitigated by leveraging encryption, guaranteeing confidentiality of data-at-rest, with sharing capabilities among different tenants.
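One standard way to reconcile data-at-rest encryption with sharing across users and tenants is envelope encryption: each file gets its own random data key, and that key is wrapped under the public key of every authorized user. The sketch below is a generic pattern under hypothetical names, not Synnefo's design, and uses the Python cryptography library:

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

alice = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob   = rsa.generate_private_key(public_exponent=65537, key_size=2048)

file_key, nonce = os.urandom(32), os.urandom(12)
ciphertext = AESGCM(file_key).encrypt(nonce, b"shared project data", None)

# grant access: wrap the file key under each collaborator's public key
wrapped = {name: key.public_key().encrypt(file_key, oaep)
           for name, key in (("alice", alice), ("bob", bob))}

# bob later unwraps his copy of the key and decrypts the shared file
bob_key = bob.decrypt(wrapped["bob"], oaep)
assert AESGCM(bob_key).decrypt(nonce, ciphertext, None) == b"shared project data"
```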
Expected Outcomes and Contribution of TREDISEC
GRNET would like to achieve stronger data isolation to enforce the access control against unauthorized physical data access in the cloud scenario. As with Use Case 1, GRNET expects to combine its own engineering strengths with the research excellence of the TREDISEC partners so that novel encryption in multi-tenancy solutions can be brought into a production environment.
Use Case 3: Optimised WebDav service for confidential storage
Partner: ARSYS
Overview
As many people share data over the Internet and WebDav is one of the most popular protocols for accessing shared storage services, the target of this use case is to provide an equivalent level of service to those users who would like to store and share the encrypted version of their sensitive data. The core challenge of this use case is thus to share access among multiple users so they can co-browse and co-edit files (sharing resources in a multi-tenant setting), while ensuring confidentiality via encryption and preserving the performance of the service in terms of storage efficiency, by applying technologies such as data deduplication on encrypted files.
Business Context
As a differentiator from the many cloud storage business solutions offered in the international markets, the target of this use case is to provide the Arsys cloud storage service with multi-tenancy access control and end-to-end data encryption, without compromising service efficiency and performance. Having these characteristics would provide a clear advantage over competitor solutions. In this use case, for the shared storage service, Arsys will incorporate TREDISEC multi-tenancy access control, data encryption and storage efficiency into its WebDav access service.
Technology Context
Currently Arsys uses GlusterFS as its cloud storage, with four nodes constituting a storage cluster. All components of the storage cluster (servers, network connections and files) are redundant in order to avoid any single point of failure and to minimize downtime. Customers access their files through the WebDav protocol over HTTPS, so all information transferred over the Internet is encrypted in transit. However, the storage cluster supports neither data encryption nor deduplication over encrypted data.
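Because WebDav is built on plain HTTP verbs, a client can encrypt locally and upload the ciphertext with an ordinary PUT, leaving the protocol untouched. The endpoint, credentials and key handling below are placeholders for illustration, not an Arsys interface:

```python
import os
import requests
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = os.urandom(32)        # in practice a managed, persistent per-user key
nonce = os.urandom(12)
with open("report.pdf", "rb") as f:
    plaintext = f.read()
blob = nonce + AESGCM(key).encrypt(nonce, plaintext, None)   # encrypt client-side

resp = requests.put(
    "https://dav.example.com/files/report.pdf.enc",   # hypothetical WebDav URL
    data=blob,
    auth=("user", "password"),
)
resp.raise_for_status()   # WebDav servers reply 201 Created or 204 No Content
```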
Expected Outcomes and Contribution of TREDISEC
The motivation of this use case is built upon three pillars: (i) enabling customers with multiple tenants to manage access control and share resources (this includes tenants with more than one user, where users have different permissions); (ii) enabling encryption to guarantee data confidentiality for the cloud storage; (iii) enhancing cloud storage efficiency over duplicated data. It is challenging to satisfy all three requirements at the same time. ARSYS expects that the outcome of TREDISEC will help to build a complete solution.
Use Case 4: Enforcement of Biometric-based Access Control
Partner: MORPHO
Overview
The use case describes the authentication of a user by some service provider, and assumes that the authentication process contains a biometric comparison (also called biometric matching). The use case assumes, moreover, that the service provider delegates the biometric matching to some dedicated server, called "cloud authentication server". In addition to the result of the authentication, the cloud authentication server supplies a proof that the biometric matching was correctly performed. This proof is enabled by the use of verifiability techniques and is at the core of this use-case.
Business Context
The demand for user authentication services has grown as digital services have become part of everyday life. Conventionally, each agency, company and online service manages its own user database. As a result, user management becomes cumbersome, particularly for the users themselves. Several paradigms have emerged to facilitate authentication, such as Single Sign-On or Identity Federation. The authentication of users is itself seen as a service and might be delegated from the service provider to dedicated entities. From another perspective, approaches to user authentication increasingly involve biometric data. However, the outsourcing of processes that make use of biometric data raises privacy concerns and is thus generally avoided. The use of verifiability techniques would reinforce the confidence in outsourcing the authentication service in general, and the processing of biometric data in particular, to an external server.
Technology Context
Current models for the delegation of authentication are described in several standards, such as OpenID and SAML. They involve three types of participants: users, service providers and identity providers. Such models are the basis for the use case, with the difference that the management of the users is taken on by the service providers. The identity providers are called here cloud authentication servers, to which the service providers delegate the biometric matching. Their role is to manage the biometric algorithms, so that the service providers do not have to care about these technologies. A cloud authentication server may be provided as SaaS in a private cloud if biometric data are not encrypted, and in a public cloud if biometric data are encrypted.
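For intuition, biometric matching itself can be as simple as a thresholded Hamming distance between binary templates. The toy matcher below is not MORPHO's algorithm and uses an arbitrary threshold; it only shows the kind of comparison whose correct execution the cloud authentication server would have to prove:

```python
import numpy as np

THRESHOLD = 0.25   # hypothetical: accept if at most 25% of bits differ

def matches(enrolled: np.ndarray, probe: np.ndarray) -> bool:
    distance = np.count_nonzero(enrolled != probe) / enrolled.size
    return distance <= THRESHOLD

rng = np.random.default_rng(0)
enrolled = rng.integers(0, 2, 2048)          # enrolled binary template

probe = enrolled.copy()                      # same user, noisy fresh capture
flips = rng.choice(2048, size=200, replace=False)
probe[flips] ^= 1                            # ~10% sensor noise

print(matches(enrolled, probe))                       # True: genuine match
print(matches(enrolled, rng.integers(0, 2, 2048)))    # False: impostor (~50% differ)
```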
Expected Outcomes and Contribution of TREDISEC
MORPHO plans to outsource its biometrics-based user authentication service to a cloud authentication server. Without full trust, MORPHO is looking for security primitives that can provide verifiable proofs for each authentication result, in order to audit the operations on the cloud authentication provider. The main contribution of TREDISEC to this use case is the design of the primitives that enable the verifiable processing of biometric data. A critical issue here is to achieve verifiability while being compatible with the comparison of two biometric data objects, as included in the authentication step. Additionally, the contribution of TREDISEC on processing over encrypted data would enforce the privacy of the biometric data.
Use Case 5: Secure Upgrade of Biometric Systems
Partner: MORPHO
Overview
The use case describes the outsourcing of major updates on biometric databases. This use case is typical of biometric systems. A major accuracy update requires reprocessing the raw images to enable the new algorithms. This process usually takes time (e.g., several months) and requires in-house hardware. Thus, the prospective delegation of such computations to the cloud seems appealing. However, for privacy reasons, outsourced biometric data should be encrypted. Therefore, the use case raises the issue of applying update algorithms over encrypted biometric data.
Business Context
The storage of biometric data is at the core of biometric systems. Managing big identity databases composed of millions of records is by itself cumbersome, but a lot of technical and privacy concerns are added when databases store biometric data. In particular, privacy concerns often preclude outsourcing computations on biometric data. The current practice is not to outsource the computation over biometric data at all. On the other hand, encrypting biometric data provides data privacy, but precludes the ability to compute over the data. As a result, efficient solutions for delegating processing over encrypted biometric data would offer clear advantages over current solutions.
Technology Context
Algorithms using biometric data regularly evolve, as well as the formats under which the biometric data are stored. As a result, biometric systems sometimes need major upgrades, meaning that the stored biometric data must be processed in order to be compatible with the new formats and algorithms. Biometric data cannot be processed by an external cloud system for privacy reasons. As a result, the current solution prohibits the outsourcing of system upgrades. The use-case introduces a model that allows outsourcing the processing of biometrics data. According to this model, a pre/post-processing entity is deployed in a private cloud environment and a cloud update server is deployed with a public cloud provider. The pre/post-processing entity lies between the biometric database and the cloud, ensuring first the outsourcing of the system upgrades, and then the integration of the result supplied by the cloud.
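A small taste of "computing over encrypted data" can be given with additively homomorphic Paillier encryption, here via the python-paillier (phe) package. Real template re-encoding needs far richer operations than affine transforms, so treat this strictly as a sketch of the principle on a toy feature vector:

```python
from phe import paillier   # pip install phe (python-paillier)

public_key, private_key = paillier.generate_paillier_keypair()

features = [12, 7, 31]                               # toy biometric feature vector
encrypted = [public_key.encrypt(v) for v in features]

# cloud update server: apply an affine re-encoding (2*v + 1) to every feature
# without ever decrypting -- only the homomorphic properties are used
updated = [2 * c + 1 for c in encrypted]

# pre/post-processing entity, back on-premise: decrypt the upgraded template
assert [private_key.decrypt(c) for c in updated] == [25, 15, 63]
```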
Expected Outcomes and Contribution of TREDISEC
In order to outsource the computation over sensitive biometric data, MORPHO expects that the outcome of TREDISEC will provide privacy-preserving primitives for processing biometric data. More specifically, the encryption primitives should be compatible (and efficient) with the signal processing operations to be carried out on the raw biometric images. Additionally, if the cloud server were able to prove that the processing over encrypted biometric data has been correctly performed, it would ensure that the biometric data are correctly updated once integrated in the biometric system. From a business perspective, such solutions, built upon the technologies brought by TREDISEC, would significantly decrease the overall time and cost of biometric system upgrades.
Use Case 6: Database Migration into a Secure Cloud
Partner: SAP
Overview
This use case describes the migration of a company’s legacy data into a secure cloud environment. This can be assumed on the background of a mid-sized company who wants to move from an on-premise ERP solution to a cloud solution. However, none of the sensitive information within their data should be accessible in clear outside of the data owners’ company. Hence, encryption of the data is required.
Encrypting legacy data, which can easily contain multiple gigabytes of data, could take several months. Hence, a migration process needs significant computational resources, e.g. an on-premise cluster environment, to speed up the process. Moreover, the encrypted data should be optimized for storage space, adapted to the data owner’s sensitivity requirements and achieve optimal performance in later queries executed on top of it. Furthermore, the data should be stored within the cloud provider’s database in such a way that multi-tenancy can be realized for the cloud provider’s cost benefit while assuring that no other tenant has access to the data owners’ data.
Business Context
Small, midsized and large enterprises increasingly use functionality provided by cloud services. This may include ERP, CRM, HR and other business applications providing solutions for day-to-day business scenarios, as well as simple data storage solutions with hosted databases. Using such hosted functionality requires outsourcing the company’s data – including highly sensitive data – to a cloud provider. Storing sensitive data outside of a company’s own premises exposes the data to the risk of being misused. For instance, an honest-but-curious database administrator working for the cloud provider can easily access any stored data. Obviously, it is in the company’s best interest to store its data in a secure way (i.e., encrypted). This usually comes with the restriction that no application can process the data if it is not decrypted beforehand. There are, however, solutions such as CryptDB which make it possible to store encrypted data while maintaining the ability to execute SQL statements directly on the encrypted data. Utilizing such a solution requires that all legacy data – which can easily run into terabytes – undergo encryption before being transferred and stored at the cloud provider. This can take weeks to months, which leads to potential downtime. Alternatively, the encryption of a fixed database state can be done in parallel to the day-to-day business, but this requires a complex update of the encrypted data afterwards. Hence, an optimized solution for the secure migration of large sets of business data into a secure cloud is required.
Technology Context
The encryption service should be provided as Software as a Service (SaaS), e.g., in a private cloud. A company would then use the encryption service to encrypt its legacy data before storing it with the cloud provider. The encryption should be based on a framework similar to CryptDB, which allows encrypting data in a database in such a way that the execution of SQL statements is still possible over the encrypted data.
A specific JDBC driver may be used for connecting to a database containing encrypted data (see Figure). The goal of the driver is to make access to the encrypted data transparent to the client application, which means that a large set of regular SQL queries can be used to search over encrypted data. The encryption service is required for the initial setup of the sensitive data.
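To illustrate the CryptDB-style idea, the sketch below stores a column as deterministic tokens so that equality WHERE clauses still work: the database matches tokens without learning plaintexts, and the driver rewrites query literals. We use an HMAC as a stand-in for deterministic encryption (a real deployment would store a decryptable deterministic ciphertext), and all names and keys are hypothetical:

```python
import hmac, hashlib, sqlite3

SECRET = b"tenant-master-key"   # held by the data owner, never by the provider

def det_token(value: str) -> str:
    # deterministic: equal plaintexts map to equal tokens, enabling equality search
    return hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER, city TEXT)")
for cid, city in [(1, "Berlin"), (2, "Paris"), (3, "Berlin")]:
    db.execute("INSERT INTO customers VALUES (?, ?)", (cid, det_token(city)))

# the driver rewrites the literal; the database only ever sees tokens
rows = db.execute("SELECT id FROM customers WHERE city = ?",
                  (det_token("Berlin"),)).fetchall()
print(rows)   # [(1,), (3,)]
```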
Expected Outcomes and Contribution of TREDISEC
On-premise applications migrated into the cloud (e.g., moving an ERP solution to a cloud provider) may contain sensitive data. Data encryption, however, limits the functionality of those applications, so it is necessary to look for processing techniques over encrypted data that are suitable for database queries. Therefore, the outcome of TREDISEC is a concept for a service that facilitates the optimized encryption of large data sets using different encryption schemes while at the same time maintaining query processing over the encrypted data.
Welcome to the TREDISEC blog!
We are living in a fascinating moment for Security in the Cloud, and we would like to share it with you.
There is plenty of evidence that the Cloud market will grow over the next 5 years, becoming a very important source of employment and economic benefits for citizens and businesses across the European Union. But at the same rate that the Cloud market grows, the challenges for Security and Privacy increase as well.
TREDISEC is a three-year project, co-funded by the European Commission as part of the Horizon 2020 EU Research and Innovation programme. The project started in April 2015 and its main purpose is to provide an innovative solution to an ambitious research challenge: letting functional Cloud requirements (e.g. efficient storage, multi-tenancy) and non-functional security requirements (confidentiality, integrity, access control) live together in a secure and trustworthy Cloud ecosystem.
Through this window, we would like to keep you updated on TREDISEC's advances.
But this is not only about us.
We will talk about other research initiatives and FP7/H2020 projects with similar and complementary goals too.
If you are interested in Cloud Security, we aim to be a space worth watching as new advances happen.
We have a long way ahead of us... and we invite you to walk alongside us :)
TREDISEC at ICT 2015
Atos organized a networking session at ICT 2015, supported by the H2020 projects WITDOM, TREDISEC and PRISMACLOUD.
ICT is one of the most relevant events organized by the European Commission to promote projects, financed by European funds, that research the latest technological advances beyond the state of the art.
The session entitled "Key challenges in end-to-end privacy/security in untrusted environments" was chaired by Silvana Muscella, CEO & founder of Trust-IT, with a recognized career in ICT communication & business.
Next, Nicolas Notario (WITDOM project), Henrich Pöhls (PRISMACLOUD project) and Ghassan Karame (TREDISEC project) took over the session, and each of them presented a specific challenge in cloud security.
The ensuing discussion unveiled the audience's concerns. Particularly remarkable were the animated discussions about informed consent and the ethical conflicts arising from the legal loopholes associated with it.
No doubt privacy and security in the cloud will be a main topic of debate in the coming years, due to the growth of cloud services and the resulting consequences for data protection.
For more details, see the presentations given by the speakers.
What is TREDISEC?
TREDISEC is a Research and Innovation Action co-funded by the European Commission under the Horizon 2020 programme.
More details here.
What objectives does TREDISEC pursue?
TREDISEC aims to enhance the security and privacy of existing cloud technologies, while keeping efficiency and cost levels stable.
Currently, when we introduce an improvement in the security of cloud storage, it comes with higher costs and lower efficiency.
TREDISEC encompasses both functional and non-functional demands, and has the ambition to design mechanisms that satisfy security and functional requirements at a comparable level.
Which organizations participate in TREDISEC and what is their specific role in the project?
The project is coordinated by Atos, an international leader in digital services, heading a consortium of European representatives from industry and research institutes across different countries (NEC in the UK, Eurecom in France, GRNET in Greece, Arsys in Spain, IBM in Switzerland, SAP SE in Germany and Morpho in France).
The TREDISEC consortium partners contribute as providers of Cloud services, solutions and infrastructures, participating in the research and development of beyond-state-of-the-art technologies and solutions in the field of secure and trustworthy ICT.
What is the budget of the project?
TREDISEC has a budget of around 6,5 million €, of which the European Commission funds around 4,4 million €. The budget is used mainly to cover the personnel costs of the entities that take part in the project, to conduct research, develop new innovative technologies, and test them in realistic evaluation scenarios. It is also used to manage the project, foster collaboration, regularly disseminate the achievements, and produce a technology transfer assessment that turns into a realistic strategy for the exploitation of results, either individually or jointly, by the members of the consortium.
What is the expected duration of the project?
TREDISEC is a 3-year project (36 months). The project started on April 1st 2015 and is planned to finish by March 31st 2018.
What is the innovation potential of the security technologies that TREDISEC will develop?
Most existing solutions are not suitable for the market because they either provide security at the expense of the economy of scale and cost effectiveness of the cloud (e.g. data is encrypted before being outsourced, which prevents any computation from being performed in the cloud), or they meet the latter objectives at the expense of security (e.g., data deduplication and compression optimally use the resources of the cloud provider but require the customer to blindly trust its cloud provider).
The main aim of TREDISEC is to bridge this gap by developing tools and systems that address these shortcomings and enhance the confidentiality and integrity of data outsourced to the cloud without affecting functionality or storage efficiency.
From a practical standpoint, the ambition of this project is to develop systems and techniques that make the cloud a secure and efficient place to store data. We plan to step away from a myriad of disconnected security protocols or cryptographic algorithms, and to converge instead on a (possibly standardized) single framework where all objectives are met to the highest extent possible.
What is the project work-plan?
The work plan is organised in 7 work packages, whose interdependencies and relations are depicted in the project structure section.
Which are the major milestones of TREDISEC?
- MS1: Use cases and scenario context definition due at M6 (September 2015).
- MS2: Consolidated requirements and architectural models due at M12 (March 2016).
- MS3: Design of the security primitives and framework due at M20 (November 2016).
- MS4: Implementation of the security primitives and the framework due at M30 (September 2017).
- MS5: Deployment of the Use Cases Evaluation environment due at M33 (December 2017).
- MS6: Final evaluation of TREDISEC due at M36 (March 2018).
How does TREDISEC fit into the EU's Strategy for a European Digital Single Market?
The European Commission states that businesses and consumers still do not feel confident enough to adopt cross-border cloud services for storing or processing data, because of concerns relating to security, compliance with fundamental rights and data protection more generally.
Improving security and privacy features increases differentiation, placing on the market new or improved services and solutions that are aligned with the European directives on security and privacy. Moreover, by offering more secure services and solutions, users gain greater control over their data, increasing trust in IT technologies and online services.
Another priority objective of the EU is to protect our networks and critical infrastructure and to respond effectively to cyber-threats; to this end, both national and EU-level cybersecurity strategies and regulations have been adopted.
Currently, cloud infrastructure services are increasingly the preferred option for most businesses in the EU; protecting them against cyber attacks is therefore critical.
Clearly, the development of technologies that make cloud infrastructures stronger and better hardened against possible attacks, with a greater ability to recover more quickly and effectively after a cyber security incident, enhances security and business preparedness.
For the EU, Cloud Computing is one of the main competitiveness drivers for all enterprises, independently of their size and sector; it has therefore become one of the main priorities of the European Digital Agenda and a main player in numerous initiatives started within the member countries.
TREDISEC at the ICT 2015 event: a newbie's experience
The beautiful and sunny city of Lisbon hosted, on the 20th, 21st and 22nd of October, the ICT (Innovate, Connect and Transform) 2015 event, organised by the European Commission and aimed at reaching representatives of research, politics, industry, start-ups, investors and academia involved in ICT topics.
The event attracted more than 5,700 participants to the Centro de Congressos de Lisboa, and TREDISEC was present there too. Our first impression was a really efficiently organised registration process, which let attendants enter the venue with a big smile on their faces.
Diverse parallel activities were scheduled, structured along the following tracks:
- a main plenary conference presenting the new European Commission’s policies and initiatives on Research & Innovation in ICT (Horizon 2020 Programme), followed by multiple Work Programme 2016-2017 thematic sessions with detailed information on the funding opportunities in the ICT sector;
- a huge interactive exhibition showcasing the best results and impact of most recent European ICT Research & Innovation;
- the Startup Europe Forum, featuring EU policy actions for startups and SMEs, innovators, as well as for private and public investors.
Besides these activities, the event facilitated networking opportunities for making Research and Innovation connections, exploiting natural synergies, and promoting high-quality partnerships for potential future collaboration.
After two days attending illustrative plenary conferences, panel debates on hot topics, visiting exhibition stands showcasing clearly innovative initiatives and their impressive results, and participating in crowded networking sessions, on the last day of the event TREDISEC had its moment!
The networking session entitled "Key challenges in end-to-end privacy/security in untrusted environments", organised jointly by the projects TREDISEC, WITDOM and PRISMACLOUD, was scheduled for the last slot of the last day of the event. That, a priori, was a drawback for engaging as many people as possible. To overcome it, we advertised the session as much as we could during the previous two days, distributing flyers of the project and notices with the session agenda and content at every opportunity we had.
Finally, D-Day H-Hour arrived and everything was set for both the chair (Silvana Muscella, CEO & founder of Trust-IT) and the team of presenters:
- Nicolas Notario from Atos, member of WITDOM project;
- Henrich Pöhls from Passau University, representing PRISMACLOUD project;
- and Ghassan Karame from NEC Europe, our partner from TREDISEC project, who presented one of the project's innovative key points: Data protection versus storage efficiency and multi-tenancy.
Despite the bad timeslot allocated to us, during the first 10 minutes we witnessed with great joy how the small room got more and more packed :). We had prepared a networking session attendee pack which included a promotional flyer from each of the organizing projects, the agenda and summary of the talks scheduled, and a form for posing questions, including the complete list of means available to contact us. We quickly ran out of them!
The entire session was recorded on video, including the Question and Answer part. Besides the relevant questions posed by the chair Silvana to the presenters (e.g. "With WITDOM’s approach, up to what level is the collaboration of the cloud infrastructure provider necessary?", "Why do you think that the model of trusting the cloud service providers is not suitable?" or "What gaps do you plan to fill within TREDISEC and where will you make a difference in the innovation?"), the Q/A session turned out to be a very dynamic and interesting exchange of positions between the speakers and a really committed audience.
Both the complete video recording and the individual presentations can be downloaded from here.
All in all, the session was a great success and we feel very happy with the results of this activity. It not only served to promote the TREDISEC project in general and its inherent innovative character, but also helped us find out more about the approaches adopted by others in dealing with related (even similar) security challenges in the Cloud. In the end, making the Cloud a secure and efficient place to store and process data is a joint effort, and it will surely have an impact on existing businesses and generate new profitable business opportunities.
TREDISEC will be presented in CODEMOTION
On the 27th of November, Olof Sandstrom, Operations Manager at Arsys, will give a talk about Public Cloud Security, focusing on debunking myths that are often linked to cloud security.
Since cloud solutions started to expand across industry, security has been one of the most common concerns hindering cloud adoption.
Mr. Sandstrom will talk about cloud security from this point of view, and will present TREDISEC as an example of research in cloud security within state-of-the-art projects.
More details about the Codemotion event: http://2015.codemotion.es/
Post in Arsys blog about Codemotion and the presentation of the TREDISEC project
Arsys, a partner in the TREDISEC project consortium, has published on its blog a post about CODEMOTION, one of the biggest events for developers, held in Madrid.
The TREDISEC project will be presented at this event by Olof Sandstrom, Arsys Operations Manager, as an EC initiative whose objective is the development of procedures and technological solutions that combine security, efficiency and technical functionality, making cloud adoption easier for European enterprises.
The complete Arsys post is available here (text in Spanish): http://www.arsys.info/eventos/codemotion-el-mayor-evento-sobre-programac...
Presentation of TREDISEC project in Cybercamp: towards more trusted and reliable cloud infrastructures
Cybercamp 2015 is the meeting place for young talents, families, entrepreneurs and anyone interested in cybersecurity. It will be held in the BarclayCard Center of the Community of Madrid from 27 to 29 November.
Beatriz Gallego, Atos project leader, will give a talk in Cybercamp next Sunday, 29th November at 10:45 a.m. to explain how TREDISEC can provide security and efficiency in the cloud for public sector, cloud providers or enterprises, including SMEs.
The 1st TREDISEC General Assembly was held in Sophia Antipolis on 19-20 November
On 19th-20th November the TREDISEC consortium got together to report on the project status and the progress of the managerial/administrative, financial, technical and dissemination/communication activities carried out in the past 8 months.
Attendees presented the work conducted in the different work packages, the advances made and the deliverables submitted.
In addition, the next steps were decided according to the roadmap designed for the project.
The meeting was hosted in Sophia Antipolis by EURECOM.
Panos Louridas, from GRNET, presented the TREDISEC project at the e-Democracy conference
The e-Democracy conference is co-organised by the Hellenic Data Protection Authority and a number of universities. It is intended, similarly to previous occasions, to provide a forum for presenting and debating the latest developments in the field, from a technical, political and legal point of view.
The conference included a special session on research conducted within European R&D projects related to e-Democracy and e-Participation, e-Government, Security, Privacy and Trust, e-Crime, e-Fraud and Digital Forensics.
Within this special session, Panos Louridas, from GRNET, one of the companies taking part in the TREDISEC consortium, presented the TREDISEC project, explaining its contribution to the fields of Security, Privacy and Trust.
The objective of TREDISEC is to develop novel, modular end-to-end security primitives that can be combined in a unified framework to cover the entire spectrum of cloud-relevant security, functional, and non-functional requirements.
TREDISEC plans to step away from a myriad of disconnected security protocols or cryptographic algorithms, and to converge on a single framework where all objectives are met. As a result, it will deliver a number of practical security solutions for cloud storage and computation, making the cloud a secure and efficient haven for data storage.
More information is available in the presentation shown at the conference.
D2.2. Requirements analysis and consolidation
The objective of the TREDISEC project is to develop tools that enhance the confidentiality and integrity of the data and computations outsourced to the cloud. While a number of solutions already address some cloud security problems, the TREDISEC framework will be designed to integrate various security primitives into a unified whole without sacrificing the scalability advantages of cloud computing.
The purpose of this deliverable is to explore the various functional and non-functional requirements (including security and privacy requirements) of cloud storage and computation systems and identify not only the most relevant ones but also those which may not be met simultaneously. With this aim, the following methodology has been applied:
• The six representative TREDISEC use cases have been analysed and a complete set of functional requirements derived: these requirements must be fulfilled for the correct operation of the cloud system. In addition, the major security and privacy requirements of these use cases are highlighted, targeting the protection (privacy and integrity) of storage and computation operations.
• Since the description of the use cases and the derived security requirements are high-level, the deliverable further focuses on the different primitives the project aims to design (in WP3, WP4 and WP5): once the dedicated security and privacy requirements are defined, the document explains how they affect the functional requirements and specifies the ultimate (and sometimes conflicting) TREDISEC requirements, which basically combine one security requirement with one or several functional requirements.
• As the final target of the project is the development of a unified framework (WP6) integrating the different security primitives, this document also outlines the requirements with respect to the architecture of the framework, which will help TREDISEC developers and administrators choose the most convenient architectural approach and specify technical details. These requirements are differentiated with respect to their technical, business and quality nature.
Thanks to the specification of the requirements combining security and operational aspects, the TREDISEC project is now moving into the design of the various security primitives (WP3, WP4 and WP5) and further into the orchestration of these individual modules.
Publication of book with the full papers presented in the E-Democracy 2015 conference
The book E-Democracy – Citizen Rights in the World of the New Computing Paradigms has recently been published in electronic and print format on the Springer website.
This book constitutes the refereed proceedings of the 6th International Conference on E-Democracy, E-Democracy 2015, held in Athens, Greece, in December 2015.
It contains 13 revised full papers presented together with 8 extended abstracts that were selected from 33 submissions.
The papers are organized in topical sections on privacy in e-voting, e-polls and e-surveys; security and privacy in new computing paradigms; privacy in online social networks; e-government and e-participation; legal issues. The book also contains the extended abstracts describing progress within European research and development projects on security and privacy in the cloud; secure architectures and applications; enabling citizen-to-government communication.
TREDISEC was selected to be presented at the conference, following approval of the extended abstract submitted by the project consortium.
Panos Louridas, from GRNET, one of the companies taking part in the TREDISEC consortium, was responsible for presenting TREDISEC at the conference, explaining its contribution to the fields of Security, Privacy and Trust.
The TREDISEC extended abstract appears in the book starting on page 193.
TREDISEC: Towards Realizing a Truly Secure and Trustworthy Cloud
An article entitled "TREDISEC: Towards Realizing a Truly Secure and Trustworthy Cloud" has been published in issue no. 104 (January 2016) of the magazine ERCIM News, in the section devoted to presenting Research and Innovation initiatives.
The article is a joint work of the project consortium and authored by Beatriz Gallego-Nicasio (ATOS), Melek Önen (EURC) and Ghassan Karame (NEC), and presents an overview of the project, its main research and innovation challenges and an outline of the validation approach that is going to be followed.
The issue is published online at http://ercim-news.ercim.eu/en104 and can also be downloaded in PDF or EPUB format:
http://ercim-news.ercim.eu/images/stories/EN104/EN104-web.pdf
http://ercim-news.ercim.eu/images/stories/EN104/EN104.epub
TREDISEC: 10 months later

Problem context
End-to-end security comes at odds with current functionality offered by the cloud. Existing state of the art solutions completely give up one requirement for the other. End-to-end security aims to endow the users with full control over their outsourced data, but cloud service providers may not be able to efficiently process clients' data, nor may they be able to take full advantage of cost-effective storage solutions which rely on existing deduplication and compression mechanisms.
Another important point that should not be overlooked when designing security mechanisms for cloud systems is their integration into a single framework. Typically, a security primitive is devised for a single use-case and/or a specific application. Although such a design approach may reduce the complexity of the solution, it may lead to situations where security primitives are incompatible to the point that they cannot be implemented using the same interface or the same framework.

Progress towards the objectives and advance beyond the state of the art
During this reporting period, the TREDISEC consortium partners have been focusing on designing novel end-to-end security solutions for scenarios with conflicting functional and security requirements, using as a basis the representative scenarios and use-cases defined by the end-user partners. We first had to identify the functional requirements that are crucial to the cloud business and explore non-functional requirements such as storage efficiency and multi-tenancy. Next, we had to analyse the conflicts between these requirements and security needs in order to develop new solutions that address these shortcomings and enhance security. Moreover, state-of-the-art mechanisms and solutions have been analysed thoroughly in the technical work-packages (WP2, WP3, WP4 and WP5). In particular, partners of the consortium have already achieved the following advances:
- devised new primitives to support data confidentiality alongside data deduplication, including an analysis of their compatibility with Proof of Ownership (PoW) mechanisms;
- analysed the state of the art with respect to searchable encryption, secure biometric computations, and possible parallel computation and migration mechanisms;
- described mechanisms for optimized storage of encrypted data based on the analysis of historical or anticipated SQL queries;
- conducted a thorough survey of the state of the art on verifiable storage, verifiable computation and verifiable ownership, in order to identify the TREDISEC-specific requirements;
- proposed a new security model for outsourced proofs of retrievability;
- studied the possibility of applying verifiable computing techniques to biometric comparison;
- investigated approaches to vulnerability discovery and isolation in file systems that are used to provide storage for cloud services;
- proposed a novel mechanism which enables emerging many-core processor architectures to provide secure isolation properties for cloud platforms, especially IaaS deployments.
The design of the TREDISEC framework, which efficiently integrates the required security primitives without incurring extra processing and storage costs for the cloud service providers or end-users, has also been a key activity during these last months. The ultimate goal of the TREDISEC framework is to facilitate the orchestration of different security primitives deployed in real cloud systems.
A first architectural model of the framework has been outlined, taking into account business, quality and operational requirements, since it should support a range of stakeholders (e.g. security administrators, developers, or cloud system engineers) and target cloud offerings.
By using the framework, security primitives can be tested in isolation or combined with others in order to produce pre-packaged security solutions, ready to be deployed and guaranteed to be free of incompatibilities. The framework should also permit cloud system engineers and security experts to select, according to their own system needs, the functional and non-functional (security and privacy) requirements they wish TREDISEC to fulfil.
Summary of work performed and main achievements

From the project kick-off on the 1st of April until the 31st of December 2015, spanning M1-M9 of the project plan, the activities performed by the TREDISEC consortium can be structured along the following lines of work:
- Launching the project and setting up the different procedures (quality, reporting, risk management, document/output storage and management, deliverable quality review, etc.), management structure, guidelines and supporting tools to enable a seamless and fruitful collaboration among the consortium partners, in order to achieve the project objectives and deliver the work promised in the DoA according to schedule. This is described in the deliverable document “D1.1 Project Quality Assurance Plan”, released by M3.
- Definition of the Innovation strategy for the project and agreement on a plan to implement and deploy it within the existing project structures. This consisted of identifying the project's key innovation points and specifying “innovation-related activities” such as monitoring, emergency plans, or take-up activities; defining a framework for assessing the project's innovation health level; and devising strategies for identifying and acquiring feedback from different entities and communities to better align the project results with users’ expectations. This is described in the deliverable document “D1.5 Innovation Strategy and Plan”, released by M3. In the last quarter of the period, a first innovation check was carried out by the Innovation Director (from NEC) with the work-package leaders, in relation to the identified key innovations of TREDISEC. The result was that, so far, there are no identified market threats to the expected TREDISEC innovations.
- Definition of a common project strategy for dissemination and communication of project advances and results, to set the baseline for individual partners’ activities and reach the maximum possible impact. The strategy is accompanied by a plan that establishes a series of activities to promote the project over its entire duration, as well as a complete set of graphical material that supports these activities. The graphical material entails the project branding (i.e. logo, colour code, document templates, a poster and a promotional brochure/flyer). The project website (www.tredisec.eu), publicly accessible and online since M2, is considered the main point of contact for externals and the first means for dissemination and communication of project advances and regular achievements (the website constitutes a deliverable and is described in the accompanying document “D7.1 TREDISEC public website”). Further channels include social media accounts (i.e. a dedicated LinkedIn group and Twitter account); infographics (within this period, one infographic has been made available through the website); and press releases and campaigns, used to promote the official project start and the networking session at the ICT 2015 event, which TREDISEC co-organised and where a talk about one specific project line of research was scheduled. The communication and dissemination activities are grouped into phases, each one focusing on the promotion of certain aspects of the project, with customized key messages targeting different types of audience (i.e. scientific, research, industry, citizens, public administration, policy-makers, etc.) and making use of the most appropriate channel in each case. The dissemination and communication strategy and the associated implementation plans are defined in two deliverable documents, “D7.2 Dissemination plan” and “D7.3 Communication strategy and plan”, both released in M6.
- Launching the technical work-packages devoted to the research and development of the security primitives. Each of these work-packages, namely WP3, WP4 and WP5, focuses first on analysing the different conflicts that may arise when trying to satisfy cloud functional requirements (e.g. efficiency, reduced costs) while providing security guarantees (e.g. confidentiality, integrity), and second on researching different schemes and primitives that overcome those conflicts.
- Description of the context scenarios and specification of the use cases that will be used to drive the technical developments and evaluate the project results. Four partners of the project (SAP, GRNET, ARSYS and MORPHO) described their context scenarios and use cases, which will be used in the project for two purposes: (i) to elicit a series of end-user requirements that will influence the design of the TREDISEC framework architecture and the security primitives developed in the technical work-packages (i.e. WP3, WP4 and WP5); and (ii) to set up the context for the evaluation activities that will take place in the last year of the project in the context of WP6. The descriptions have been compiled into a deliverable document released by M6, entitled “D2.1 Description of the context scenarios and use cases definition”, which constitutes the achievement of the first project milestone: “MS1: Use cases and scenario context definition”.
- Specification of the requirements for the TREDISEC framework and the security primitives. As indicated in the previous point, the use case scenarios propose a series of requirements for the TREDISEC technical activities from the user point of view. Besides these, the actual technological challenges the project aims to address, that is, the lack of practical solutions that combine efficiency and security in current cloud offerings, are also a source of requirements for the TREDISEC developments. All these requirements are listed, and a trade-off analysis is described, in the deliverable document “D2.2 Requirements analysis and consolidation”, released in M9.
- Outline of a proposed architectural model for the TREDISEC framework, taking into account the requirements identified in Task 2.1. This first draft analysed various state-of-the-art reference architectures of cloud systems and then proposed an approach that permits combinations of security primitives to work together holistically in a range of cloud-based settings.
- An initial survey of the market and identification of suitable commercialization options for the TREDISEC outputs (i.e. the framework and the security primitives), in order to evaluate the most appropriate business model for TREDISEC, which will influence the framework architecture, implementation approach and operational model on the one hand, and the exploitation strategies on the other.
2nd IEEE International Workshop on Secure Identity Management in the Cloud Environment (SIMICE 2016)
The 2nd IEEE International Workshop on Secure Identity Management in the Cloud Environment (SIMICE 2016), held in conjunction with the 40th IEEE Computer Society International Computers, Software and Applications Conference (COMPSAC 2016), will take place in Atlanta, Georgia, USA from the 10th to the 14th of June 2016.
The workshop is dedicated to the security and privacy aspects of identity management (IDM) in the cloud. Two tracks, namely "Concept design and enabling technologies" and "Applications and evaluations", are planned to attract both theoretical and empirical works from the IDM and cloud computing communities.
The workshop counts on TREDISEC participation. Julien Bringer, from SAFRAN Morpho, has worked on the project since the beginning and is responsible for developing specific use cases as an initial test of the technology.
Julien Bringer will take part in SIMICE-2016 as co-organizer and as a member of the Program Committee that will evaluate the submitted papers.
More details about the workshop here: http://staging.computer.org/web/compsac2016/simice
Abstract
With the advent of cloud computing, individuals and companies alike are looking for opportunities to leverage cloud resources not only for storage but also for computation. Nevertheless, the reliance on the cloud to perform computation raises the unavoidable challenge of how to assure the correctness of the delegated computation. In this regard, we introduce two cryptographic protocols for publicly verifiable computation that allow a lightweight client to securely outsource to a cloud server the evaluation of high-degree univariate polynomials and the multiplication of large matrices. Similarly to existing work, our protocols follow the amortized verifiable computation approach.
Furthermore, by exploiting the mathematical properties of polynomials and matrices, they are more efficient and give way to public delegatability. Finally, besides their efficiency, our protocols are provably secure under well-studied assumptions.
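As a flavour of why verification can be far cheaper than recomputation, consider Freivalds' classical probabilistic check for an outsourced matrix product. The following is a minimal Python sketch for illustration only, not the paper's protocols (which additionally achieve public verifiability and delegatability):

import numpy as np

def freivalds_check(A, B, C, reps=20):
    # Accepts a correct product C = A @ B always; rejects a wrong claim with
    # probability at least 1 - 2**(-reps), using only matrix-vector products.
    n = C.shape[1]
    for _ in range(reps):
        r = np.random.randint(0, 2, size=(n, 1))    # random 0/1 challenge vector
        if not np.array_equal(A @ (B @ r), C @ r):  # O(n^2) work per repetition
            return False                            # claimed product is wrong
    return True

A = np.random.randint(0, 10, (100, 100))
B = np.random.randint(0, 10, (100, 100))
assert freivalds_check(A, B, A @ B)          # an honest result passes
assert not freivalds_check(A, B, A @ B + 1)  # a tampered result is caught

The verifier performs quadratic work per repetition instead of the cubic (or sub-cubic but still super-quadratic) cost of recomputing the product itself.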
TREDISEC mentioned in Atos Research & Innovation group (ARI) booklet 2016
Atos Research & Innovation (ARI), the group serving as hub for research and development in new technologies and a key reference for the whole Atos group, has released its yearly report of ARI activities in 2015.
ARI's focus is to investigate emerging technologies and anticipate market demand with innovative solutions.
This year, ARI has proved its success in several projects, providing innovative services to customers.
For instance, ARI has led the TREDISEC project, which aims at increasing trust in cloud computing by designing new security primitives that ensure data security and user privacy while supporting the underlying storage and computation technology.
Pages 37-38 of the booklet provide more details about the group's ongoing cybersecurity projects, and more specifically about TREDISEC.
TREDISEC mentioned in Atos Research & Innovation group (ARI) booklet 2015
Atos Research & Innovation (ARI), the group serving as hub for research and development in new technologies and a key reference for the whole Atos group, has released its yearly report of ARI activities in 2015.
ARI's focus is to investigate emerging technologies and anticipate market demand with innovative solutions.
This year, ARI has proved its success in several projects, providing innovative services to customers.
For instance, ARI has led the TREDISEC project, which aims at increasing trust in cloud computing by designing new security primitives that ensure data security and user privacy while supporting the underlying storage and computation technology.
Pages 37-38 of the booklet provide more details about the group's ongoing cybersecurity projects, and more specifically about TREDISEC.
WITDOM - empoWering prIvacy and securiTy in non-trusteD envirOnMent
H2020 Research and Innovation action. The project started in January 2015. The main objective is to produce a framework for end-to-end (E2E) protection of data in untrusted and fast evolving ICT-based environments.

CREDENTIAL - Secure Cloud Identity Wallet
H2020 Innovation action. The project started in October 2015. The main idea of CREDENTIAL is to enable end-to-end security and improved privacy in cloud identity management services for managing secure access control. This is achieved by advancing novel cryptographic technologies and improving strong authentication mechanisms.

PRISMACLOUD - PRIvacy and Security MAintaining services in the CLOUD
The main idea and ambition of PRISMACLOUD is to enable end-to-end security for cloud users and provide tools to protect their privacy with the best technical means possible - by cryptography.

WISER - Wide-Impact cyber Security Risk framework
WISER is a European initiative that puts cyber-risk management at the very heart of good business practice, benefitting multiple industries, in particular critical infrastructure and process owners, and ICT-intensive SMEs. Having kicked off in June 2015, by 2017 WISER will provide a cyber-risk management framework able to assess, monitor and mitigate risks in real time.

Data Protection, Security and Privacy (DPSP) in the Cloud
The cluster was born with the aim of seeking synergies between H2020 LEIT WP2014-2015 projects addressing research and innovation on diverse solutions for ensuring data protection, security and privacy in the cloud, joining efforts towards achieving a greater impact.
Challenges for trustworthy (multi-)Cloud-based services in the Digital Single Market
The Data Protection Security and Privacy (DPSP) in the Cloud Cluster has published the Whitepaper Challenges for trustworthy (multi-)Cloud-based services in the Digital Single Market.
The future Digital Single Market (DSM) poses a number of research challenges for the coming years. In particular, the DSM Initiative #14 on “Free flow of data” directly impacts a number of security and privacy issues in (multi-)cloud-based services. The objective of this White Paper is to develop an initial map of the challenges identified by the DPSP Cluster projects related to the DSM Initiative #14 topics, at the right level of abstraction so that it can be reused by the EC and policy makers. The map includes a collection of the challenges most relevant for the next Horizon 2020 Work Programme 2018-2020.
Blog post about Tredisec on IBM research news
Under the title "IBM Scientists bring trust and reliability to the cloud with advanced cryptography in EU project", the IBM Research blog has published an interview with IBM scientists about the upcoming challenges of the TREDISEC project and its impact on security and efficiency in tomorrow's cloud.
SECODIC 2016: Secure and Efficient Outsourcing of Storage and computation of Data in the Cloud
The H2020 projects WITDOM and TREDISEC, led by Atos, co-organize the workshop on "Secure and Efficient Outsourcing of Storage and Computation of Data in the Cloud" (SECODIC 2016), held in conjunction with the ARES EU Projects Symposium 2016 at the 11th International Conference on Availability, Reliability and Security (ARES 2016), 31 August - 2 September in Salzburg, Austria.
During the workshop, hot topics related to end-to-end security, privacy and data protection in the cloud, as well as advances in the field, will be discussed. The workshop is expected to give extensive insights into the state of the art in cloud technologies and novel perspectives for ensuring security and privacy in the cloud. It will be an excellent venue for security experts and cloud providers who want to keep up with new research advances in the area of cloud security.
Registration for the SECODIC workshop is handled through the main ARES conference registration system.
• http://www.ares-conference.eu/conference/workshopsares2016/secodic-2016/
The 7 key innovation points of TREDISEC
Most existing cloud security solutions are not well suited to the market because they either provide security at the expense of the economy of scale and cost-effectiveness of the cloud (e.g. data is encrypted before being outsourced, which prevents any computation from being performed in the cloud), or they meet the latter objectives at the expense of security (e.g. data deduplication and compression optimally use the resources of the cloud provider but require the customer to blindly trust its cloud provider).

The main aim of TREDISEC is to bridge this gap by developing tools and systems that address these shortcomings and enhance the confidentiality and integrity of data outsourced to the cloud without affecting functionality or storage efficiency.
From a practical standpoint, the ambition of this project is to develop systems and techniques that make the cloud a secure and efficient place to store data. We plan to step away from a myriad of disconnected security protocols or cryptographic algorithms, and to converge instead on a (possibly standardized) single framework where all objectives are met to the highest extent possible.
Based on our assessment, we identify the 7 key innovation points of TREDISEC shown in the figure below.
TREDISEC: Trust-aware REliable and Distributed Information SEcurity in the Cloud
A paper entitled "TREDISEC: Trust-aware REliable and Distributed Information SEcurity in the Cloud", by the TREDISEC consortium, was accepted at the International Conference on e-Democracy 2015.
The paper is co-authored by the TREDISEC consortium and relates to the main topic chosen for this year's conference, "Citizen rights in the world of the new computing paradigms".
The paper has been published in a book that contains 13 revised full papers presented together with 8 extended abstracts that were selected from 33 submissions.
Abstract
Cloud computing services are increasingly being adopted by individuals and companies thanks to their various advantages such as high storage and computation capacities, reliability and low maintenance costs. Yet, data security and user privacy remain the major concern for cloud customers, since by moving their data and their computing tasks into the cloud they inherently cede control to cloud service providers. Therefore, customers nowadays call for end-to-end security solutions in order to retain full control over their data.
Mr. N. Asokan confirmed as opening keynote speaker at TREDISEC-WITDOM workshop at ARES Conference
SECODIC 2016, the workshop organized jointly by the H2020-funded projects TREDISEC and WITDOM, is honoured to announce Mr. N. Asokan as the speaker of the opening keynote of the workshop day.
Mr. N. Asokan is a Professor of Computer Science at Aalto University.
Between 1995 and 2012, he worked in industrial research laboratories designing and building secure systems, first at the IBM Zurich Research Laboratory and then at Nokia Research Center. His primary research interest has been in applying cryptographic techniques to design secure protocols for distributed systems. Recently, he has also been investigating the use of Trusted Computing technologies for securing endnodes, and ways to make secure systems usable, especially in the context of mobile devices.
Asokan received his doctorate in Computer Science from the University of Waterloo, MS in Computer and Information Science from Syracuse University, and BTech (Hons.) in Computer Science and Engineering from the Indian Institute of Technology at Kharagpur. He is an ACM Distinguished Scientist and an IEEE Senior Member.
For more information about Asokan's work see his website at http://asokan.org/asokan/
TREDISEC will participate in Trust in Digital Life event
This year's Trust in the Digital World event is to be held at the New Babylon Centre in The Hague on 15-16 June. The event is a mixture of practical demonstrations, presentations, panel discussions and “un-conference” sessions covering key challenges, visions and strategies. It is intended for those in business, the public sector and government who are involved in the policy, security, systems and processes surrounding trust.
TREDISEC will be strongly represented, with several members holding key roles in the development of the project in attendance.
Ghassan Karame, Innovation Manager of TREDISEC, will chair a panel on cloud security. The panel's topic is "Reconciliating Security and Functional Requirements in the Cloud", and it will tentatively be composed of 3-4 senior experts in the field, including representatives from industry and academia.
One of the confirmed panellists is Melek Önen, Dissemination Manager of TREDISEC. Melek Önen is a senior researcher at EURECOM. Her current research interests are the design of security and privacy protocols for various communication networks such as ad hoc networks, sensor networks, opportunistic networks and social networks. She has been involved in many European and French national research projects.
The Trust in Digital Life (TDL) community was formed by leading industry partners and knowledge institutes that hold trust and trustworthy services to be an essential ingredient of the digital economy. The TDL community is committed to enabling a trustworthy ecosystem that protects the rights of citizens while creating new business opportunities.
The “Trust in the Digital World” event is supported by Gemeente Den Haag (GDH), DG CONNECT (European Commission) and ENISA. It is Europe’s leading independent, interdisciplinary, unbiased and European-focused conference.
TREDISEC Requirements
TREDISEC aims at providing a set of security primitives that will ensure the confidentiality and integrity of the outsourced data and computations to the cloud. To help with the design of these primitives, towards the end of December 2015, we have identified the different TREDISEC requirements ranging from functional prerequisites to specific security and privacy needs. With this aim, the following methodology has been applied:

- To identify TREDISEC requirements, we first started with the analysis of the six TREDISEC use cases, which are categorized into two main categories:
- File sharing services which deal with data outsourcing in a multi-tenant environment (UC1, UC2 and UC3)
- Big Data storage and secure processing services which mainly focus on the case where customers outsource a very large amount of data to be processed at the cloud (UC4, UC5, UC6)
For each use case, we identified the major functional requirements, which encompass the basic functionalities of cloud service providers, and the generic security and privacy requirements, which cover the set of functionalities that cloud service providers should implement to assure a privacy-preserving and secure storage and processing service. The two tables below go over the entire set of functional requirements for each use case and the basic security and privacy requirements, namely storage and computation integrity, and storage and computation privacy.
- We further focus on the specification of security and privacy requirements for each technical work package delivering TREDISEC security primitives, namely WP3 (verifiability), WP4 (confidentiality and access control) and WP5 (privacy-preserving data processing), and analyse the main conflicts between the security and privacy requirements and the functional ones. We finally end up with the TREDISEC requirements resulting from this trade-off analysis. The following figure summarizes these requirements, combining security and privacy with functionality, with respect to the use cases and the technical work packages. The complete list of these requirements can be found in deliverable D2.2.
- As the final target of TREDISEC is the development of a unified framework integrating different security primitives, we also identified the requirements with respect to the architecture of the framework, which are differentiated regarding their technical, business and quality nature. These are depicted in the following figure.
Thanks to the specification of the requirements combining security and operational aspects, the TREDISEC project is now moving on to the design of the various security primitives (WP3, WP4 and WP5) and further on to the orchestration of these individual modules.
Mirror: Enabling Proofs of Data Replication and Retrievability in the Cloud
Paper entitled “Mirror: Enabling Proofs of Data Replication and Retrievability in the Cloud” has been accepted at Usenix Security 2016.
This work includes a partial acknowledgment of the TREDISEC project, in the field of data integrity technologies encompassed in WP3.
Abstract:
Proofs of Retrievability (POR) and Data Possession (PDP) are cryptographic protocols that enable a cloud provider to prove that data is correctly stored in the cloud. PDP have recently been extended to enable users to check in a single protocol that additional file replicas are stored as well. To conduct multi-replica PDP, users are however required to process, construct, and upload their data replicas by themselves. This incurs additional bandwidth overhead on both the service provider and the user, and also poses new security risks for the provider. Namely, since uploaded files are typically encrypted, the provider cannot recognize whether the uploaded contents are indeed replicas. This limits the business models available to the provider, since, e.g., reduced costs for storing replicas can be abused by users who upload different files while claiming that they are replicas. In this paper, we address this problem and propose a novel solution for proving data replication and retrievability in the cloud, Mirror, which allows the burden of constructing replicas to be shifted to the cloud provider itself, thus conforming with the current cloud model. We show that Mirror is secure against malicious users and a rational cloud provider. Finally, we implement a prototype based on Mirror, and evaluate its performance in a realistic cloud setting. Our evaluation results show that our proposal incurs tolerable overhead on the users and the cloud provider.
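For readers unfamiliar with the setting, the following toy Python sketch shows the kind of keyed challenge-response audit that PDP/POR schemes build on. It is a deliberately simplified illustration, not Mirror itself (which additionally lets the provider construct the replicas and prove it did so):

import hashlib, hmac, os, random

BLOCK = 4096

def tag_blocks(key, data):
    # client-side setup: one MAC tag per block, bound to the block index
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    tags = [hmac.new(key, i.to_bytes(8, "big") + b, hashlib.sha256).digest()
            for i, b in enumerate(blocks)]
    return blocks, tags

def audit(key, blocks, tags, samples=10):
    # the client challenges random block indices; the server answers with
    # (block, tag) pairs, which the client re-checks under its secret key
    for i in random.sample(range(len(blocks)), samples):
        expected = hmac.new(key, i.to_bytes(8, "big") + blocks[i],
                            hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tags[i]):
            return False
    return True

key = os.urandom(32)
blocks, tags = tag_blocks(key, os.urandom(64 * BLOCK))
assert audit(key, blocks, tags)

A provider that has discarded or corrupted a noticeable fraction of the blocks fails such a sampled audit with high probability.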
Available Agenda for the ARES Conference Workshop organized by TREDISEC-WITDOM projects
The agenda for the upcoming workshop, to be held in Salzburg on 31st August at the ARES Conference, is now available.
There will be a varied group of speakers from Atos Spain, NEC Germany, IBM Switzerland and EURECOM France.
The opening keynote will be given by N. Asokan, a recognized researcher mainly focused on applying cryptographic techniques to the design of secure protocols for distributed systems. Recently, he has also been investigating the use of Trusted Computing technologies for securing endnodes, and ways to make secure systems usable, especially in the context of mobile devices.
Agenda SECODIC 2016 TREDISEC/WITDOM Workshop
10:30 - 11:00 | Introduction
  - Overview WITDOM - Elsa Prieto (Atos), 15 min
  - Overview TREDISEC - Ghassan Karame (NEC), 15 min
11:00 - 11:30 | Keynote by N. Asokan, "Securing Cloud-assisted Services"
11:30 - 12:00 | Private and Secure Data Storage in the Cloud (I)
  - Eduarda Freire (IBM), talk on Data Masking, 25 min
13:00 - 14:00 | Private and Secure Data Storage in the Cloud (II)
  - Florian Thiemer (Fraunhofer), talk on "Data Sharing in the cloud with Proxy-Re-Encryption and Malleable Signature", 25 min
  - Jose Ruiz (Atos), talk on "Data-centric security is the right approach for Digital Single Market", 25 min
  - Networking session - Eduarda Freire and others, 10 min
15:15 - 16:45 | Private and Secure Processing in the Cloud
  - Matthias Neugschwandtner (IBM), talk on "Challenges for Isolating Computational Resources in Cloud Software Stacks", 25 min
  - Sujoy Sinha Roy (KU Leuven), talk on "Hardware Assisted Fully Homomorphic Function Evaluation", 25 min
  - Daniel Slamanig (Graz University), talk on "Malleable Cryptography for Security and Privacy in the Cloud", 25 min
  - Networking session - Matthias Neugschwandtner and others, 15 min
17:00 - 18:00 | Integrity and Verifiability of Outsourced Data/Computation
  - Melek Önen (EURECOM), talk on "Verifiable Polynomial Evaluation & Matrix Multiplication", 25 min
  - James Alderman (Royal Holloway, University of London), talk on "Verifiable Searchable Encryption", 25 min
You can find more details in the following link: https://www.ares-conference.eu/conference/ares-eu-symposium/secodic-2016/
International workshop on Secure and Efficient Outsourcing of Storage and Computation of Data in the Cloud
The H2020 projects WITDOM (www.witdom.eu), and TREDISEC (www.tredisec.eu) organize the International Workshop SECODIC 2016 on “Secure and Efficient Outsourcing of Storage and Computation of Data in the Cloud” to be held during the ARES 2016 Conference at the Salzburg University of Applied Sciences, Salzburg, Austria.
This workshop aims at discussing the recent advances in managing security and performance in the cloud as well as protection of data at rest and in transit.
This research is motivated not only by users’ satisfaction, but also by the enforcement of European data protection regulations as well as institutions’ internal regulations. Since the majority of institutions lack the resources and computing power to deal with large amounts of data, outsourcing data to the cloud is strictly necessary, and not complying with those regulations would mean not advancing in research.
These challenges drive a number of EU projects to devise effective solutions that meet the growing need for data protection in a number of security-critical scenarios (e.g. financial services and e-health). Two of these projects are TREDISEC and WITDOM.
The workshop has the honour of including in its agenda a keynote given by professor N. Asokan, a distinguished scientist whose research focuses on the application of cryptographic techniques to the design of secure protocols for distributed systems.
Besides, a selected group of recognized researchers in privacy and security in the cloud will present different topics which are being investigated in the framework of on-going EU projects related to this issue.
Eduarda Freire, from IBM, will chair the slot named “Private and Secure Data Storage in the Cloud”. Within this slot, Florian Thiemer (Fraunhofer) and Jose Ruiz (Atos) will give their respective talks about data sharing in the cloud.
Next, Matthias Neugschwandtner (IBM) will chair “Private and Secure Processing in the Cloud”, with the participation of Sujoy Sinha Roy (KU Leuven) and Daniel Slamanig (Graz University), who will talk about state-of-the-art cryptographic and homomorphic encryption techniques.
Finally, there is a third slot chaired by Melek Önen (EURECOM) called “Integrity and Verifiability of Outsourced Data/Computation”. Melek will also talk about “Efficient Techniques for Publicly Verifiable Delegation of Computation”, and James Alderman (Royal Holloway, University of London) will talk about “Verifiable Searchable Encryption”.
In the attached file you can find a flyer of the event with the abstracts of the talks.
For more information visit:
https://www.ares-conference.eu/conference/ares-eu-symposium/secodic-2016/.
Or contact: Elena González, elena.gonzalez@atos.net
Authenticated Encryption with Variable Stretch
Paper entitled “Authenticated Encryption with Variable Stretch” has been accepted at Asiacrypt 2016.
This work relates to the work developed in the TREDISEC project in WP5, which aims to design and evaluate new privacy-preserving primitives supporting processing services such as word search, lookup and/or retrieval.
Publication is pending.
Abstract:
In conventional authenticated-encryption (AE) schemes, the ciphertext expansion, a.k.a. stretch or tag length, is a constant or a parameter of the scheme that must be fixed per key. However, using variable-length tags per key can be desirable in practice or may occur as a result of a misuse. The RAE definition by Hoang, Krovetz, and Rogaway (Eurocrypt 2015), aiming at the best-possible AE security, supports variable stretch among other strong features, but achieving the RAE goal incurs a particular inefficiency: neither encryption nor decryption can be online.
The problem of enhancing the well-established nonce-based AE (nAE) model and the standard schemes thereof to support variable tag lengths per key, without sacrificing any desirable functional and efficiency properties such as online encryption, has recently regained interest, as evidenced by extensive discussion threads on the CFRG forum and the CAESAR competition. Yet there is a lack of a formal definition for this goal.
First, we show that several recently proposed heuristic measures trying to augment the known schemes by inserting the tag length into the nonce and/or associated data fail to deliver any meaningful security in this setting. Second, we provide a formal definition for the notion of nonce-based variable-stretch AE (nvAE) as a natural extension to the traditional nAE model. Then, we proceed by showing a second modular approach to formalizing the goal by combining the nAE notion and a new property we call key-equivalent separation by stretch (kess). It is proved that (after a mild adjustment to the syntax) any nAE scheme which additionally fulfils the kess property will achieve the nvAE goal.
Finally, we show that the nvAE goal is efficiently and provably achievable; for instance, by simple tweaks to off-the-shelf schemes such as OCB.
AsyncShock: Exploiting Synchronisation Bugs in Intel SGX Enclaves
Publication is related to WP4 of TREDISEC project.
Abstract
Intel’s Software Guard Extensions (SGX) provide a new hardware-based trusted execution environment on Intel CPUs using secure enclaves that are resilient to accesses by privileged code and physical attackers. Originally designed for securing small services, SGX bears promise to protect complex, possibly cloud-hosted, legacy applications.
In this paper, we show that previously considered harmless synchronisation bugs can turn into severe security vulnerabilities when using SGX. By exploiting use-after-free and time-of-check-to-time-of-use (TOCTTOU) bugs in enclave code, an attacker can hijack its control flow or bypass access control.
We present AsyncShock, a tool for exploiting synchronisation bugs of multithreaded code running under SGX. AsyncShock achieves this by only manipulating the scheduling of threads that are used to execute enclave code. It allows an attacker to interrupt threads by forcing segmentation faults on enclave pages. Our evaluation using two types of Intel Skylake CPUs shows that AsyncShock can reliably exploit use-after-free and TOCTTOU bugs.
Keywords: Intel Software Guard Extensions (SGX); Threading; Synchronisation; Vulnerability
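As an illustration of the bug class AsyncShock exploits, here is a plain Python sketch (not SGX enclave code) of a time-of-check-to-time-of-use race: the check and the use of a shared resource are not atomic, so a concurrently scheduled thread can invalidate the resource in between.

import threading, time

shared = {"buf": bytearray(16)}

def victim():
    if shared["buf"] is not None:   # time of check
        time.sleep(0.001)           # window an attacker who controls thread
                                    # scheduling can widen at will
        try:
            shared["buf"][0] = 1    # time of use
        except TypeError:
            print("TOCTTOU hit: buffer invalidated between check and use")

def attacker():
    shared["buf"] = None            # invalidate the resource inside the window

t1 = threading.Thread(target=victim)
t2 = threading.Thread(target=attacker)
t1.start(); t2.start(); t1.join(); t2.join()

AsyncShock makes such interleavings reliable in the SGX setting by forcing segmentation faults on enclave pages to interrupt threads at exactly the right moment.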
A transparent defense against USB eavesdropping attacks
This paper is related to WP4.
Abstract
Attacks that leverage USB as an attack vector are gaining popularity.
While attention has so far focused on attacks that either exploit the host's USB stack or its unrestricted device privileges, it is not necessary to compromise the host to mount an attack over USB. This paper describes and implements a USB sniffing attack. In this attack a USB device passively eavesdrops on all communications from the host to other devices, without being situated on the physical path between the host and the victim device. To prevent this attack, we present UScramBle, a lightweight encryption solution which can be transparently used, with no setup or intervention from the user. Our prototype implementation of UScramBle for the Linux kernel imposes less than 15% performance overhead in the worst case.
Deniable Functional Encryption
Paper related to TREDISEC WP4.
Abstract
Deniable encryption, first introduced by Canetti et al. (CRYPTO 1997), allows a sender and/or receiver of encrypted communication to produce fake but authentic-looking coins and/or secret keys that "open" the communication to a different message. Here we initiate its study for the more general case of functional encryption (FE), as introduced by Boneh et al. (TCC 2011), wherein a receiver in possession of a key k can compute from any encryption of a message x the value F(k,x) according to the scheme's functionality F. Our results are summarized as follows:
We put forth and motivate the concept of deniable FE, for which we consider two models. In the first model, as previously considered by O'Neill et al. (CRYPTO 2011) in the case of identity-based encryption, a receiver gets assistance from the master authority to generate a fake secret key. In the second model, there are "normal" and "deniable" secret keys, and a receiver in possession of a deniable secret key can produce a fake but authentic-looking normal key on its own. This parallels the "multi-distributional" model of deniability previously considered for public-key encryption.
In the first model, we show that any FE scheme for the general circuit functionality (as several recent candidate construction achieve) can be converted into an FE scheme having receiver deniability, without introducing any additional assumptions.
In addition, we show an efficient receiver-deniable FE for Boolean formulae from bilinear maps. In the second (multi-distributional) model, we show a specific FE scheme for the general circuit functionality having receiver deniability. This result additionally assumes differing-inputs obfuscation and relies on a new technique we call "delayed trapdoor circuits". To our knowledge, a scheme in the multi-distributional model was not previously known even in the simpler case of identity-based encryption.
Finally, we show that receiver deniability for FE implies some form of simulation security, further motivating study of the latter and implying optimality of our results.
Searchable Encryption for Biometric Identification Revisited
Paper related to WP5 TREDISEC.
Abstract:
Cryptographic primitives for searching and computing over encrypted data have proven useful in many applications. In this paper, we revisit the application of symmetric searchable encryption (SSE) to biometric identification. Our main contribution is two SSE schemes well-suited to biometric identification over encrypted data. While existing solutions use SSE with single-keyword search and a highly sequential design, we use threshold conjunctive queries and parallelizable constructions. As a result, we are able to perform biometric identification over a large amount of encrypted biometric data in reasonable time. Our two SSE schemes achieve different trade-offs between security and efficiency: the first scheme is more efficient, but is proved secure only against non-adaptive adversaries, while the second is proved secure against adaptive adversaries.
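As background, the following toy Python sketch shows the single-keyword SSE baseline the paper improves upon (its actual schemes support threshold conjunctive queries and parallel search): the server stores only PRF-derived tokens and masked record identifiers, and can locate matches only once the client releases the token for a queried keyword.

import hashlib, hmac, os

def prf(key, msg):
    return hmac.new(key, msg, hashlib.sha256).digest()

def build_index(key, db):
    # db maps a keyword to the list of (integer) record ids containing it
    index = {}
    for word, ids in db.items():
        token = prf(key, b"tok" + word.encode())
        seed = prf(key, b"pad" + word.encode())
        masked = []
        for i, rid in enumerate(ids):
            pad = prf(seed, i.to_bytes(4, "big"))[:4]   # per-position mask
            masked.append(bytes(x ^ y for x, y in zip(rid.to_bytes(4, "big"), pad)))
        index[token] = masked
    return index

def server_search(index, token, seed):
    # run by the server: it learns the token and seed only for queried keywords
    ids = []
    for i, ct in enumerate(index.get(token, [])):
        pad = prf(seed, i.to_bytes(4, "big"))[:4]
        ids.append(int.from_bytes(bytes(x ^ y for x, y in zip(ct, pad)), "big"))
    return ids

key = os.urandom(32)
index = build_index(key, {"feature_17": [3, 8, 21]})
token, seed = prf(key, b"tok" + b"feature_17"), prf(key, b"pad" + b"feature_17")
assert server_search(index, token, seed) == [3, 8, 21]

In the biometric setting, string keywords would be replaced by quantized feature components, and a match is declared once enough queried components agree, which is roughly the role of the paper's threshold conjunctive queries.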
Delegating Biometric Authentication with the Sumcheck Protocol
Paper related to TREDISEC WP3.
Abstract
In this paper, we apply the Sumcheck protocol to verify the Euclidean (resp. Hamming) distance computation in the case of facial (resp. iris) recognition. In particular, we consider a border-crossing use case where, thanks to an interactive protocol, we delegate the authentication to the traveller. Verifiable computation aims to give the result of a computation together with a proof of its correctness. In our case, the traveller takes over the authentication process and produces a proof that he did it correctly, leaving it to the authorities to check its validity. We integrate privacy-preserving techniques to prevent an eavesdropper from obtaining information about the biometric data of the traveller during his interactions with the authorities. We provide implementation figures for our proposal showing that it is practical.
Study of a verifiable biometric matching
This paper is related to WP3.
Abstract
In this paper, we apply verifiable computing techniques to biometric matching. The purpose of verifiable computation is to give the result of a computation along with a proof that the calculations were correctly performed. We adapt the sumcheck protocol and present a system that performs verifiable biometric matching in the case of a fast border control. This is a work in progress and we focus on verifying an inner product. We then give some experimental results of its implementation. Verifiable computation here helps to strengthen the authentication phase, bringing into the process a proof that the biometric verification has been correctly performed.
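For intuition, here is a toy Python implementation of the textbook sumcheck protocol specialized to an inner product (illustrative only; the paper's system differs in its field, encoding and privacy layers). The prover claims S = sum over the Boolean hypercube of g(x) = A~(x) * B~(x), where A~ and B~ are the multilinear extensions of the two vectors, and is challenged once per variable.

import random

P = 2**61 - 1  # toy prime field (illustrative parameter)

def mle(vec, xs):
    # evaluate the multilinear extension of vec (length 2**m) at xs in F_P^m
    m = len(xs)
    total = 0
    for idx, v in enumerate(vec):
        term = v % P
        for j in range(m):
            bit = (idx >> (m - 1 - j)) & 1
            term = term * (xs[j] if bit else (1 - xs[j]) % P) % P
        total = (total + term) % P
    return total

def interp3(ys, x):
    # evaluate the degree-2 polynomial through (0,ys[0]),(1,ys[1]),(2,ys[2]) at x
    pts, total = [0, 1, 2], 0
    for i in range(3):
        num, den = 1, 1
        for j in range(3):
            if i != j:
                num = num * ((x - pts[j]) % P) % P
                den = den * ((pts[i] - pts[j]) % P) % P
        total = (total + ys[i] * num % P * pow(den, P - 2, P)) % P
    return total

def sumcheck_inner_product(a, b):
    m = (len(a) - 1).bit_length()                  # vectors of length 2**m
    g = lambda xs: mle(a, xs) * mle(b, xs) % P
    claim = sum(x * y for x, y in zip(a, b)) % P   # honest prover's claim
    rs = []
    for rnd in range(m):
        # prover: send the degree-2 restriction of g to the current variable,
        # represented by its evaluations at X = 0, 1, 2
        ys = []
        for X in range(3):
            rest_vars, s = m - rnd - 1, 0
            for rest in range(2 ** rest_vars):
                tail = [(rest >> (rest_vars - 1 - j)) & 1 for j in range(rest_vars)]
                s = (s + g(rs + [X] + tail)) % P
            ys.append(s)
        # verifier: consistency check, then a fresh random challenge
        assert (ys[0] + ys[1]) % P == claim, "prover cheated"
        r = random.randrange(P)
        claim, rs = interp3(ys, r), rs + [r]
    assert g(rs) == claim   # final spot-check of g at the random point
    return True

assert sumcheck_inner_product([3, 1, 4, 1], [2, 7, 1, 8])

The verifier's per-round work is constant apart from the final evaluation of g, which a full system replaces by a commitment opening or, as in the border-control setting above, by quantities the verifier can compute itself.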
A verifiable system for automated face identification
This paper is related to TREDISEC WP3.
Abstract
In this paper, we consider a use case where an airport passenger travels and uses an automated gate to cross a border. We detail three phases: a pre-check before the arrival at the airport, the travel of the passenger from his check-in to the automated border gates and finally, the crossing of the gate. To accelerate the throughput at the border gates, we want to identify his face among a flight passenger list during the second phase. This identification is split between the passenger who takes a picture of his face with his smartphone and the immigration authorities. We rely on cryptographic verifiable computation techniques to ensure the security of the process. Experimental results show that our protocol is practical.
Initial encryption of large searchable data sets using hadoop
This paper is related to TREDISEC, WP5.
Abstract
With the introduction and widespread use of externally hosted infrastructures, secure storage of sensitive data becomes more and more important. There are systems available to store and query encrypted data in a database, but not all applications start with empty tables; many begin with sets of legacy data. Hence, there is a need to transform existing plaintext databases to encrypted form. Existing enterprise databases may contain terabytes of data, and a single machine would require many months for the initial encryption of a large data set. We propose encrypting data in parallel using a Hadoop cluster, a simple five-step process comprising the Hadoop set-up, target preparation, source data import, encryption of the data, and finally export to the target. We evaluated our solution on real-world data and report on performance and data consumption. The results show that encrypting data in parallel can be done in a very scalable manner: using a parallelized encryption cluster instead of a single server machine reduces the encryption time from months down to days or even hours.
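To make the encryption step concrete, here is a hypothetical Hadoop Streaming mapper sketched in Python; the script name, the ENC_KEY environment variable and the column position are invented for the example, and the 'cryptography' package is assumed to be available on the worker nodes (the paper's actual pipeline may differ):

#!/usr/bin/env python3
# encrypt_mapper.py: reads CSV lines on stdin, replaces the sensitive column
# with its AES-GCM encryption, and writes the transformed line to stdout.
import base64, os, sys
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

KEY = bytes.fromhex(os.environ["ENC_KEY"])  # 32-byte key, distributed out of band
SENSITIVE_COL = 1                           # position of the column to encrypt
aead = AESGCM(KEY)

for line in sys.stdin:
    cols = line.rstrip("\n").split(",")
    nonce = os.urandom(12)                  # fresh nonce per record
    ct = aead.encrypt(nonce, cols[SENSITIVE_COL].encode(), None)
    cols[SENSITIVE_COL] = base64.b64encode(nonce + ct).decode()
    print(",".join(cols))

Running it map-only lets every input split be encrypted in parallel across the cluster, e.g. (paths and jar name illustrative): hadoop jar hadoop-streaming.jar -input /data/plain -output /data/encrypted -mapper encrypt_mapper.py -file encrypt_mapper.py -numReduceTasks 0.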
Message-Locked Proofs of Retrievability with Secure Deduplication
This paper addresses the problem of data retrievability in cloud computing systems performing deduplication to optimize their space savings: while there exist a number of proof of retrievability (PoR) solutions that guarantee storage correctness with cryptographic means, these solutions unfortunately come at odds with the deduplication technology.
This paper addresses the problem of data retrievability in cloud computing systems performing deduplication to optimize
their space savings: While there exist a number of proof of retrievability (PoR) solutions that guarantee storage correctness
with cryptographic means, these solutions unfortunately come at odds with the deduplication technology.
To reconcile proofs of retrievability with file-based cross-user deduplication, we propose the message-locked PoR approach, whereby the PoR effect on duplicate data is identical and depends only on the value of the data segment. As a proof of concept, we describe two instantiations of existing PoRs and show that the main extension is performed during the setup phase, whereby both the keying material and the encoded version of the to-be-outsourced file are computed based on the file itself. We additionally propose a new server-aided message-locked key generation technique that, compared with related work, offers better security guarantees.
On Information Leakage in Deduplicated Storage Systems
Paper pending to be published.
Abstract
Privacy-preserving range queries allow encrypting data while still enabling queries on ciphertexts if their corresponding plaintexts fall within a requested range. This gives a data owner the possibility to outsource data collections to a cloud service provider without sacrificing privacy or losing the functionality of filtering this data. However, existing methods for range queries either leak additional information (like the ordering of the complete data set) or slow down the search process tremendously by requiring a query against each ciphertext in the data collection. We present a novel scheme that only leaks the access pattern while supporting amortized polylogarithmic search time. Our construction is based on the novel idea of enabling the cloud service provider to compare requested range queries. By doing so, the cloud service provider can use the access pattern to speed up search time for range queries in the future. On the one hand, values that have fallen within a queried range are stored in an interactively built index for future requests. On the other hand, values that have not been queried do not leak any information to the cloud service provider and stay perfectly secure.
In order to show its practicability we have
Abstract
The software-as-a-service (SaaS) market is growing very fast, but still many clients are concerned about the confidentiality of their data in the cloud. Motivated hackers or malicious insiders could try to steal the clients' data. Encryption is a potential solution, but supporting the necessary functionality in existing applications is difficult. In this paper, we examine encrypting analytical web applications that perform extensive number-processing operations in the database. Existing solutions for encrypting data in web applications poorly support such applications. We employ a proxy that adjusts the encryption to the level necessary for the client's usage and also supports additively homomorphic encryption. This proxy is deployed at the client and all encryption keys are stored and managed there, while the application is running in the cloud. Our proxy is stateless and we only need to modify the database driver of the application.
We evaluate an instantiation of our architecture on an exemplary application. We only slightly increase the average page load time from 3.1 seconds to 4.7 seconds. However, roughly 40% of all data columns remain probabilistically encrypted. The client can set the desired security level for each column using our policy mechanism. Hence, our proxy architecture offers a solution to increase the confidentiality of the data at the cloud provider at a moderate performance penalty.
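To illustrate the additively homomorphic piece of such an architecture, the sketch below uses the third-party python-paillier library (`pip install phe`). Names and values are purely illustrative, not the paper's code; it only demonstrates the property the proxy relies on.

```python
# Toy demonstration of additively homomorphic encryption with the
# third-party python-paillier library: the server can aggregate
# ciphertexts without ever seeing plaintexts or keys.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# The client-side proxy encrypts two values before outsourcing them.
enc_a = public_key.encrypt(1200)
enc_b = public_key.encrypt(3400)

# The (untrusted) database adds the ciphertexts without decrypting them.
enc_sum = enc_a + enc_b

# Only the proxy, which holds the private key, can decrypt the aggregate.
assert private_key.decrypt(enc_sum) == 4600
```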
D1.3. Second Periodic Activity and Management Report
This deliverable will give a brief summary of the progress of work and management aspects of period M11-M18 for the Commission officer.
This deliverable is Confidential: only for members of the consortium (including the Commission Services)
D2.3. TREDISEC architecture and initial framework design
This deliverable evaluates the architectural models and selects the appropriate one for the project.
This deliverable also provides a first design of the TREDISEC framework.
D3.1. Requirements and trade-off between verifiability and data reduction
This deliverable will identify the specific requirements of TREDISEC use cases and analyze the compatibility of existing verifiability solutions with data reduction techniques.
D3.2. Specification and Preliminary Design of Verifiability mechanisms
This report will introduce initial design of different verifiability primitives.
D4.1. A Proposal for Access Control Models for Multitenancy
This deliverable will assess current approaches for access control and propose novel models, based on current progress in ABAC models, to cope with multi-tenancy requirements and in particular with distributed attributes.
A mapping between these ABAC-based models to enforceable policy languages (e.g. XACML) will be proposed, including the design of an enforcement component to parse, interpret and execute access control policies.
D5.2. Optimization of outsourcing activities and initial design of privacy preserving data processing primitives
This deliverable will provide a tool set to optimize the actual outsourcing process, by for example, parallelizing encryption before outsourcing data to the cloud. In this context, we introduce an initial design of privacy preserving primitives for data processing. This deliverable will also comprise the complete design and evaluation of privacy preserving data processing primitives.
D7.4. First Dissemination and Communication activities reporting
This 1st Dissemination and Communication Activities report collects all the set of activities developed along the 1st year of the project (M01 – M12), using the selected means described in D7.2 and D7.3, and to evaluate if the progress reached to achieve dissemination and communication goals is satisfactory.
Additionally, the section “Next Steps” describes the schedule for the diffusion of the project along the 2nd year.
Horizon 2020 projects are all about impact. While beneficiaries want to see projects yield concrete results, the European Commission is also fond of inspiring success stories that show what difference EU-backed projects could make.
In Horizon 2020, beneficiaries are actually contractually obliged to promote their project and its results, targeting the information to the public, media, or other audiences.
The TREDISEC consortium follows a dissemination and communication strategy defined at consortium level, which sets the baseline for individual partners' activities, in order to reach the maximum possible impact.
• Dissemination strategy was defined in D7.2 as the way TREDISEC engages different dissemination groups including academic researchers and business stakeholders.
• Communication strategy was defined in D7.3 delivered in September 2015, as the combination of rules that are going to guide the information flow from the project towards the outside world.
This deliverable is Confidential: only for members of the consortium (including the Commission Services)
SUNFISH
SecUre iNFormatIon SHaring in federated heterogeneous private clouds
An EU H2020 project which aims to provide a specific and new solution to the lack of the necessary infrastructure and technology that would allow European Public Sector players to integrate their computing clouds. The SUNFISH project aims to reduce the management cost of private clouds owned by Public Administrations, while maintaining required security levels, and to accelerate the transition to 21st century interoperable and scalable public services, boosting enforcement of the European Digital Single Market.

Sharing Proofs of Retrievability across Tenants
This paper relates to WP3 of TREDISEC with respect to data integrity technologies.
Abstract:
Proofs of Retrievability (POR) are cryptographic proofs which provide assurance to a single tenant (who creates tags using his secret material) that his files can be retrieved in their entirety. However, POR schemes completely ignore storage-efficiency concepts, such as multi-tenancy and data deduplication, which are being widely utilized by existing cloud storage providers. Namely, in deduplicated storage systems, existing POR schemes would incur an additional overhead for storing tenants’ tags which grows linearly with the number of users deduplicating the same file. This overhead clearly reduces the (economic) incentives of cloud providers to integrate existing POR/PDP solutions in their offerings.
In this paper, we propose a novel storage-efficient POR, dubbed SPORT, which transparently supports multi-tenancy and data deduplication.
More specifically, SPORT enables tenants to securely share the same POR tags in order to verify the integrity of their deduplicated files. By doing so, SPORT considerably reduces the storage overhead borne by cloud providers when storing the tags of different tenants deduplicating the same content. We show that SPORT resists malicious tenants and cloud providers (including collusion among a subset of the tenants and the cloud). Finally, we implement a prototype based on SPORT, and evaluate its performance in a realistic cloud setting. Our evaluation results show that our proposal incurs tolerable computational overhead on the tenants and the cloud provider.
Securing Cloud Data under Key Exposure
This paper includes a partial acknowledgment of the TREDISEC project, since this paper relates to NEC’s contribution in WP4 of TREDISEC with respect to data confidentiality technologies.
Abstract
Recent news reveal a powerful attacker which breaks data confidentiality by acquiring cryptographic keys, by means of coercion or backdoors in cryptographic software. Once the encryption key is exposed, the only viable measure to preserve data confidentiality is to limit the attacker's access to the ciphertext. This may be achieved, for example, by spreading ciphertext blocks across servers in multiple administrative domains, thus assuming that the adversary cannot compromise all of them. Nevertheless, if data is encrypted with existing schemes, an adversary equipped with the encryption key can still compromise a single server and decrypt the ciphertext blocks stored therein. In this paper, we study data confidentiality against an adversary which knows the encryption key and has access to a large fraction of the ciphertext blocks. To this end, we propose Bastion, a novel and efficient scheme that guarantees data confidentiality even if the encryption key is leaked and the adversary has access to almost all ciphertext blocks. We analyze the security of Bastion, and we evaluate its performance by means of a prototype implementation. We also discuss practical insights with respect to the integration of Bastion in commercial dispersed storage systems. Our evaluation results suggest that Bastion is well-suited for integration in existing systems since it incurs less than 5% overhead compared to existing semantically secure encryption modes.
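To make the ciphertext-spreading intuition concrete, here is a toy all-or-nothing construction in the spirit of Rivest's package transform, sketched in Python with the `cryptography` package. It is not Bastion's actual (and considerably more efficient) algorithm: the per-object key is masked with a hash of the entire ciphertext, so an adversary missing even one server's share of the ciphertext cannot recover the key and therefore cannot decrypt anything.

```python
# Toy all-or-nothing encryption: decrypting ANY part of the data requires
# possession of ALL ciphertext bytes, because the key is masked with a
# hash of the full ciphertext. Illustrative only; not Bastion's scheme.
import os, hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def aont_encrypt(plaintext: bytes):
    key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    ct = AESGCM(key).encrypt(nonce, plaintext, None)
    # Mask the key with a hash of the whole ciphertext: recovering the
    # key requires every ciphertext byte from every server.
    digest = hashlib.sha256(ct).digest()[:16]
    masked_key = bytes(a ^ b for a, b in zip(key, digest))
    return nonce, ct, masked_key

def aont_decrypt(nonce: bytes, ct: bytes, masked_key: bytes) -> bytes:
    digest = hashlib.sha256(ct).digest()[:16]
    key = bytes(a ^ b for a, b in zip(masked_key, digest))
    return AESGCM(key).decrypt(nonce, ct, None)

parts = aont_encrypt(b"spread these bytes across several servers")
assert aont_decrypt(*parts) == b"spread these bytes across several servers"
```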
Reconciling Security and Functional Requirements in Multi-tenant Clouds
This paper is related to WP3 and WP4 of TREDISEC.
Abstract
End-to-end security in the cloud has gained even more importance after the outbreak of data breaches and massive surveillance programs around the globe last year. While the community features a number of cloud-based security mechanisms, existing solutions either provide security at the expense of the economy of scale and cost effectiveness of the cloud (i.e., at the expense of resource sharing and deduplication techniques), or they meet the latter objectives at the expense of security (e.g., the customer is required to fully trust the provider).
In this paper, we shed light on this problem, and we analyze the challenges in reconciling security and functional requirements in existing multi-tenant clouds. We also explore the solution space to effectively enhance the current security offerings of existing cloud services. As far as we are aware, this is the first contribution which comprehensively explores possible avenues for reconciling the current cloud trends with end-to-end security requirements.
AURA: Recovering from Transient Failures in Cloud Deployments
Paper related to WP6 work of TREDISEC.
Abstract
In this work, we propose AURA, a cloud deployment tool used to deploy applications over providers that tend to present transient failures. The complexity of modern cloud environments imparts an error-prone behavior during the deployment phase of an application, something that hinders automation and magnifies costs both in terms of time and money. To overcome this challenge, AURA formulates an application deployment as a Directed Acyclic Graph traversal and re-executes the parts of the graph that failed. AURA manages to execute any deployment script that updates filesystem-related resources in an idempotent manner through the adoption of a layered filesystem technique.
In our demonstration, we allow users to describe, deploy and monitor applications through a comprehensive UI and showcase AURA’s ability to overcome transient failures, even in the most unstable environments.
D2.4. Final Architecture and Design of the TREDISEC Framework
This deliverable presents the final version of the architecture of TREDISEC, which covers both the final version of the TREDISEC Framework and the final version of the architecture and life-cycle of the security primitives.
D1.6. Innovation Management Report
The goal of this document is to outline the current progress of the TREDISEC project from the point of view of Innovation Management activities. Recall that the main purpose of innovation management is to ensure that the project research activities, technological developments, and achievements, are kept well connected to outside technology developments. An additional goal of innovation management here is to maintain low risk level for the project and to prevent the project results from losing relevance given the evolving trends in the market.
A Leakage-Abuse Attack Against Multi-User Searchable Encryption
Paper related to WP5.
Abstract
Searchable Encryption (SE) allows a user to upload data to the cloud and to search it in a remote fashion while preserving the privacy of both the data and the queries. Recent research results describe attacks on SE schemes using the access pattern, denoting the ids of documents matching search queries, which most SE schemes reveal during query processing. However, SE schemes usually leak more than just the access pattern, and this extra leakage can lead to attacks (much) more harmful than the ones using basic access pattern leakage only. We remark that in the special case of Multi-User Searchable Encryption (MUSE), where many users upload and search data in a cloud-based infrastructure, a large number of existing solutions have a common leakage in addition to the well-studied access pattern leakage. We show that this seemingly small extra leakage allows a very simple yet powerful attack, and that the privacy degree of the affected schemes has been overestimated. We also show that this new vulnerability affects existing software. Finally, we formalize the newly identified leakage profile and show how it relates to previously defined ones.
Confidentiality with Storage Efficiency
- compression of encrypted data
- secure data deduplication
- proof of ownership with data confidentiality

Convergent encryption is a cryptographic primitive introduced by Douceur et al. (Douceur, et al., 2002), attempting to combine data confidentiality with the possibility of data deduplication.
Convergent encryption of a message consists of encrypting the plaintext using a deterministic (symmetric) encryption scheme with a key which is deterministically derived solely from the plaintext. Clearly, when two users independently attempt to encrypt the same file, they will generate the same ciphertext which can be easily deduplicated. Unfortunately, convergent encryption does not provide semantic security as it is vulnerable to content-guessing attacks.
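As a concrete illustration, here is a minimal Python sketch of convergent encryption using the `cryptography` package. The fixed zero nonce is safe here only because each content-derived key encrypts exactly one message; the sketch also makes the content-guessing weakness plain, since anyone can recompute the ciphertext of a guessed plaintext.

```python
# Minimal sketch of convergent encryption: the key is derived from the
# plaintext itself, so identical files yield identical ciphertexts and
# can be deduplicated. As noted above, this is NOT semantically secure:
# a guessed plaintext can be confirmed by re-encrypting it.
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

ZERO_NONCE = bytes(12)  # acceptable only because each derived key
                        # encrypts exactly one, fixed message

def convergent_encrypt(plaintext: bytes) -> bytes:
    key = hashlib.sha256(plaintext).digest()  # key depends only on content
    return AESGCM(key).encrypt(ZERO_NONCE, plaintext, None)

# Two users independently encrypting the same file produce the same
# ciphertext, which the provider can deduplicate.
assert convergent_encrypt(b"shared file") == convergent_encrypt(b"shared file")
```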
In TREDISEC, we aim at designing solutions for privacy preserving data deduplication that do not rely on fully trusted entities; they will rather leverage novel and innovative mechanisms to ensure that only the data owner can disclose the content of its data.
Confidentiality with Multitenancy
- distributed (ABAC) policy enforcement with multi-tenancy
- efficient shared ownership
- secure data deletion

Cloud systems are composed of several, often complex software modules: in the presence of vulnerabilities or colluding privileged users, a malicious entity can subvert the correct execution of the system and compromise confidentiality and integrity.
Perhaps counter-intuitively, when it comes to a storage system, access control rules must include the support for secure data deletion; that is, the rightful owner must be able to instruct the system to destroy any copy of their data, regardless of caching, snapshots, replicated or erasure-coded copies. Traditional solutions (e.g. digital shredding with overwrite patterns) are either widely impractical when we meet the scale of today's cloud storage systems, or are not fine-grained enough, or fail on specific media (e.g. log-structured systems used in modern SSDs). Cryptographic solutions to this problem have been found (Cachin, et al., 2013), but as we shall see later, they are ineffective when combined with storage efficiency functions, and deduplication in particular.
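The cryptographic route mentioned above essentially reduces deletion to key destruction. The following minimal Python sketch, with a hypothetical in-memory key store standing in for a real key-management service, illustrates the idea; as the text notes, this clashes with deduplication, since a ciphertext shared by several tenants cannot be "deleted" by destroying a single tenant's key.

```python
# Minimal sketch of cryptographic ("crypto-shredding") deletion: each
# object is encrypted under its own key, and secure deletion reduces to
# destroying that key. Every replica, snapshot or backup of the
# ciphertext becomes unreadable at once.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key_store = {}  # object_id -> key; hypothetical stand-in for a KMS

def put(object_id: str, data: bytes) -> bytes:
    key = AESGCM.generate_key(bit_length=256)
    key_store[object_id] = key
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, data, None)  # store anywhere

def get(object_id: str, blob: bytes) -> bytes:
    key = key_store[object_id]
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)

def secure_delete(object_id: str) -> None:
    # Destroying the key renders every copy of the ciphertext useless,
    # regardless of caching, replication or erasure-coded copies.
    del key_store[object_id]
```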
We postulate that existing cloud storage platforms are still too weak when it comes to isolating tenants and containing attacks, and argue that the threat of unknown vulnerabilities and the subsequent loss of data governance is still one of the main reasons why businesses are still afraid of the cloud. Yet, without resource sharing, the cloud model cannot be successfully implemented.
Confidentiality & Data Processing
- privacy preserving word search with data reduction
- privacy preserving word search with multi-tenancy

Confidentiality of data requires that when users outsource data, the cloud should not learn any information about the data it is storing and the operations performed over it.
Although classical encryption algorithms ensure data confidentiality, they unfortunately prevent the cloud from operating over encrypted data. The obvious approach would be to encrypt all data with a secure encryption algorithm such as AES and store it in the cloud. However, while secure, the data can then no longer be processed in the cloud: it has to be downloaded and decrypted on the client to execute any query on it. This makes any serious Database-as-a-Service offering questionable, and it is how many traditional DBMSs such as Sybase, Oracle or DB2, and solutions like Dropbox, appear to work when they claim to encrypt data and provide cloud storage.
Moreover, both the queries issued by the user and their results should remain confidential to the cloud. Existing crypto primitives such as searchable encryption or private information retrieval cannot immediately be adopted by current cloud solutions.
Availability & Integrity with Storage Efficiency
- proofs of retrievability with deduplication
- verifiable computation
- system integrity verification

Whereas POW deals with the assurance that a client indeed possesses a given file, Provable Data Possession (PDP) and Proof of Retrievability (PoR) deal with the dual problem of ensuring - at the client-side - that a server still stores the files it ought to. PoR and PDP schemes address the requirement of data integrity (ensuring that data has not undergone malicious modifications) and availability (ensuring that data is still available in its entirety and can be downloaded if needed).
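To make the challenge-response idea concrete, here is a toy spot-checking protocol in the spirit of PDP, sketched in Python. Block size, tag layout and helper names are illustrative assumptions; real PoR/PDP schemes use homomorphic tags plus erasure coding to obtain constant client state and full retrievability guarantees rather than the linear tag storage used here.

```python
# Toy spot-checking in the spirit of PDP: the client tags each block with
# an HMAC before outsourcing, then challenges the server on random block
# indices and verifies the returned blocks against the stored tags.
import hmac, hashlib, secrets

BLOCK = 4096

def tag_file(key: bytes, data: bytes):
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    tags = [hmac.new(key, str(i).encode() + b, hashlib.sha256).digest()
            for i, b in enumerate(blocks)]
    return blocks, tags  # blocks go to the server; tags stay with the client

def challenge(n_blocks: int, n_samples: int):
    return [secrets.randbelow(n_blocks) for _ in range(n_samples)]

def verify(key: bytes, tags, indices, returned_blocks) -> bool:
    return all(
        hmac.compare_digest(
            tags[i],
            hmac.new(key, str(i).encode() + b, hashlib.sha256).digest())
        for i, b in zip(indices, returned_blocks))

key = secrets.token_bytes(32)
data = secrets.token_bytes(10 * BLOCK)
blocks, tags = tag_file(key, data)      # `blocks` are outsourced
idx = challenge(len(blocks), 3)         # client picks random indices
assert verify(key, tags, idx, [blocks[i] for i in idx])
```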
Trusted Execution Environments offer a way of securing PoR and PDP protocols. In particular, trusted computing based systems can be used to generate proofs supporting properties on the lower layers of the software stack and the function set of the Trusted Platform Module (TPM). While feasible in theory, such approaches still suffer from the limitations highlighted in the previous sections.
Encryption keys stored on the hard disk are susceptible to tampering. TPM solutions offer protected storage of keys through hardware and protection of authentication credentials by binding them to the platform, providing a stronger mechanism to prevent unauthorized access to the platform and thus to protect the integrity of the stored data. Authentication built on top of trusted computing services (based on the use of TPMs) provides higher degrees of assurance, but the performance overheads introduced can be significant.
End-to-end (E2E) security is increasingly being used as a means to maintain data-at-rest and data-in-transit confidentiality. Within the end-to-end security paradigm, data is encrypted very close to its source at the client side, and the client is the only one in possession of the keys used to encrypt; thus no information is revealed to the cloud provider or other cloud provider tenants.
Database management systems are integral components of many systems as they provide a well-established, efficient and scalable way of processing large amounts of data. Under the cloud paradigm, it becomes extremely appealing to preserve the ability to process data after its migration to the cloud. However, on-demand databases outsourced in the cloud are vulnerable to additional attacks compared to on-premise databases. While the cloud provider organization is usually trusted, its employees like database operators may misuse their elevated privileges to access cloud data.
One important feature that remains an open challenge and which we strive to assure is cloud verifiability. That is, providing cloud customers with necessary means to obtain evidence of the compliance of the services they purchase with the security and the privacy requirements mandated by regulations or SLAs. In the case of computation outsourcing, cloud customers are also interested in solutions that grant them the capability of verifying the correctness of the computations conducted by the cloud service providers.
Securely enforcing data access policies is a challenge of paramount importance in existing clouds. In fact, current clouds do not implement any mechanism to ensure the secure deletion of their data and rely on the cloud to enforce data access decisions between different tenants. This latter limitation becomes especially evident, when the cloud is untrusted to perform such unilateral decisions.
There is no global solution for data deletion in the cloud. TREDISEC will provide architectures and mechanisms to guarantee secure data deletion for cloud storage providers. Given such mechanisms, users will have cryptographic guarantees that their data is deleted in a timely manner when they ask the provider to do so. Deletion will account for data available to the user, as well as back-up copies kept by the cloud provider for dependability reasons.
Under storage efficiency, we capture techniques such as compression and deduplication used by storage providers to make an optimal use of their storage resources by reducing the space needed to store client data.
Compression is the process of encoding information using fewer bits than the canonical representation requires. Compression can lead to a reduction of 20% to 70% of disk utilization. Cost reductions arise due to reduction in storage space, real estate, power consumption and cooling.
Deduplication strives instead to discard multiple copies of a common datum; a single copy is stored and extra copies only reference to the original.
Storage efficiency functions are at the heart of every cloud system, and constitute one of the central reasons for the appealing economy of scale of cloud systems. Cloud service providers take advantage of deduplication and compression mechanisms to minimise their storage needs and therefore, their expenditures. Thus, it is very important for TREDISEC solutions not to hinder the deployment of such mechanisms and to work seamlessly on top of them. While storage efficiency is a very important requirement for cloud services, it is more crucial to enable it for the file sharing use-cases.
Multi-tenancy refers to the ability of a system to serve multiple administrative entities (called tenants) with a high degree of resource sharing among tenants (e.g. share CPU time, disk space, etc.).
Ideally a multi-tenant cloud storage system serves requests of multiple customers (tenants) in such a way that computing and storage resources are shared among such customers and this sharing of resources does not weaken system security.
In practice, multi-tenancy is a trade-off between security and costs: the wider the subset of resources shared (e.g., same physical machine vs. same OS), the more the cloud system can amortize costs and increase utilization.
Multi-tenancy can be achieved in several different ways:
- The simplest, most secure but also most expensive way leverages hardware-level isolation: the requests of distinct tenants are handled by different hardware.
- A second approach is based on hardware and platform virtualization techniques that create multiple virtual nodes and storage facilities (e.g. volumes, file systems, containers) for each tenant.
- Process-level isolation hinges on the isolation provided by multi-user operating systems to separate resources belonging to different tenants.
- Finally, within application-level isolation, the application is enhanced with access control enforcement to grant or deny access to otherwise shared resources.
The cloud services provided by TREDISEC should accommodate a multi-tenant environment, that is, an environment in which multiple users share the ownership of outsourced data or are permitted to operate on the data without actually being owners. This requirement is most relevant to the use cases pertaining to file sharing services.
D4.4. A proposal for secure enforcement of policies in the Cloud
Cloud systems are a great platform for collaboration and shared resource usage. However, such cloud systems can only be successful if they securely enforce policies in the cloud, as they otherwise put users’ data at risk. During the course of this deliverable we will present three contributions targeted at providing a better enforcement of cloud policies.
We present the implementation of the TREDISEC security primitive Access Control for Multi-tenancy that was outlined as part of deliverable D4.1. Multi-tenancy makes cloud systems attractive for both customers and providers due to the lower costs. However, such systems also require special care in terms of access control as tenants have to be securely separated from each other.
We also present a novel technique aimed at enhancing the collaboration on cloud storage for group members, e.g. a set of employees. Such members want to use collaboratively-accessible cloud storage, but due to data protection regulation they also need secure deletion in order to protect customer privacy and data security.
Finally, we outline a new instantiation of interaction for multiple distrusting parties that want to make shared access control decisions on a shared cloud repository. Our system prevents a single party from monopolizing the access control decisions, but in contrast provides an efficient way for collaborative access control decisions for cloud storage using blockchain technologies.
D6.2. Evaluation criteria
In this deliverable we describe the methodologies that we plan to use in order to evaluate the outcomes of TREDISEC. We present our approach to assess whether the results of the project fulfil the requirements and necessities of the use cases, identified in deliverable D2.1 “Description of the context scenarios and use cases definition”, and to measure to what extent these requirements are met.
TREDISEC has two major technological outcomes: the TREDISEC Framework and the security primitives. In our approach, we perform the assessment of the maturity level of these results by deploying the TREDISEC Framework and security primitives in the use cases of the project and other internal testing environments.
Along the evaluation process we will validate compliance to the requirements identified in WP2 (cf. D2.2 “Requirements Analysis and Consolidation” ), and assess the degree of enhancement brought by the TREDISEC technological outcomes in each use case. On one side, we will evaluate the overall project success by concluding whether the objectives have been achieved. In this case, we refer to the evaluation criteria defined by all the use case owners and the framework owners. On the other side, we evaluate the TREDISEC technological outcomes, i.e. the framework and the security primitives, by deploying them in the use cases and using the corresponding indicators to perform measurements.
In order to homogenise the different evaluations, we have defined two different types of domain-specific indicators to evaluate TREDISEC technologies: use case process indicator, which focuses on the process described in each use case; and technology-related indicators, which focuses on functional and non-functional characteristics of the technologies developed. For each of the objectives a success criterion is defined together with the measurement methodologies.
Notice that for all use cases and the framework, the focus areas to be evaluated along the processes and requirements fulfilment are defined in detail by all use case and framework owners.
D7.5. Second Dissemination and Communication activities reporting
The present deliverable D7.5 collects the Dissemination and Communication activities performed by the TREDISEC consortium during the second year of the project, i.e. from April 2016 until March 2017.
The activities performed along this period aim to achieve the main objectives identified in the Communication Strategy and Plan (deliverable D7.3) at the beginning of the project. These objectives are listed next:
• Raise awareness of the need for research in cloud security, and in particular, of the benefits that TREDISEC may bring into society and business;
• Promote the effort of the EU in pushing the investment in technological projects, such as TREDISEC, that span technical, societal, and economical benefits for European citizens;
• Promote the project achievements and emphasize the innovation advances as a key feature of TREDISEC;
• Enhance the reputation of the consortium members;
• Support exploitation of the TREDISEC results and ensure that project outcomes will be taken into production.
The basis of the scientific knowledge developed within the scope of the project has been strengthened through the publication of thirteen papers in refereed conferences and workshops. This represents a significant dissemination effort for the scientific work conducted in the project, and is one of the most remarkable achievements of this second year.
It is also worth highlighting the organization of the SECODIC Workshop at the ARES Conference, a collaboration among six EU-funded projects: WITDOM, PRISMACLOUD, CREDENTIAL, Coco Cloud, CLARUS and, of course, TREDISEC. Eleven talks were given by eleven different speakers, with the keynote by recognized researcher Professor N. Asokan standing out.
Besides, we have used the resources available to us, such as the graphic material and the web platform, to actively promote the project in the opportunities that have come up along this period.
The deliverable then describes these activities in detail, together with the next steps for the final year of the project from the perspective of the already defined communication strategy.
D7.7. Business Models for TREDISEC
In order to generate revenue within the EU economic area, the consortium has identified business models that will sustain the outcome of the TREDISEC project in terms of business benefits and potential triggers for markets. This document describes the relevant customer segments, depending on their needs, focusing on the sectors with a higher risk of attack (and thus, more interested in improving their systems), and the best way to approach them. We have identified two main channels to reach potential customers: an online channel for reaching users with a technical profile and “turn” them into influencers inside their companies; and a field sales force for reaching larger companies that require customization.
The proposed business model advocates delivering the project results via a unified Framework, which is able to integrate (or "glue together") all the Security Primitives developed by the project partners.
Following this business model proposal, a "freely" available Framework does not imply that there is no benefit from it. There are numerous ways in which TREDISEC can benefit from the Framework: for example, the Framework will almost certainly need customization for different installations, so there will be a considerable need for consultancy, maintenance and customization work.
Under the Horizon 2020 Framework Programme, EC demands that supported research projects reach society, promoting their value and the benefits derived from the technological and scientific activity through public funding so they are returned to society. The European Commission states that businesses and consumers still do not feel confident enough to adopt cross-border cloud services for storing or processing data, because of concerns related to security, compliance with fundamental rights (regulations, etc.), and data protection in general.
In deliverable D1.6 “Innovation management report” we have already performed a review and analysis of the state of the art technologies, and concluded that current solutions lack important features from the customer’s point of view (i.e. simplicity, fast deployment, protection against vulnerabilities, etc.). Moreover, both the European Citizens and Organizations report suffering vulnerabilities with an important cost.
Therefore, improving security and privacy features will contribute to evolve the cloud offerings, placing new services and solutions on the market that are aligned with the European directive and the GDPR regulation for security and privacy.
In the TREDISEC project, we address these problems by creating technologies that will impact existing businesses and will generate new profitable business opportunities. Our value proposition is to develop novel, modular, end-to-end security primitives, which also provide functional capabilities and can be provisioned by a unified framework, covering the entire spectrum of cloud-relevant security, functional, and non-functional requirements.
In order to achieve that, TREDISEC proposes a product portfolio which is built around the following key exploitable project results:
• TREDISEC Security Primitives: Software components that address specific combinations of usually exclusive functional-security requirements, such as: confidentiality with storage efficiency, confidentiality with multi-tenancy, confidentiality with efficient data processing and availability, and integrity with storage efficiency.
• TREDISEC Framework: System that will combine and orchestrate the aforementioned security primitives with the objective of creating a single cloud security framework.
TREDISEC products provide and add value to existing market solutions, such as SAP HANA and ~okeanos by GRNET, which are deployed within the project consortium, but also to solutions outside it, such as OpenStack and Amazon EC2. In this way, businesses and organizations will be able not only to address current customers' security and privacy concerns, but also to comply with corporate security requirements and EU data protection rules, without significant additional computational or storage costs and with negligible reduction in performance.
This deliverable is Confidential: only for members of the consortium (including the Commission Services)
HardIDX: Practical and Secure Index with SGX
Abstract. Software-based approaches for search over encrypted data are still either challenged by lack of proper, low-leakage encryption or slow performance.
Existing hardware-based approaches do not scale well due to hardware limitations and software designs that are not specifically tailored to the hardware architecture, and are rarely well analyzed for their security (e.g., the impact of side channels). Additionally, existing hardware-based solutions often have a large code footprint in the trusted environment susceptible to software compromises.
In this paper we present HardIDX: a hardware-based approach, leveraging Intel’s SGX, for search over encrypted data. It implements only the security critical core, i.e., the search functionality, in the trusted environment and resorts to untrusted software for the remainder. HardIDX is deployable as a highly performant encrypted database index: it is logarithmic in the size of the index and searches are performed within a few milliseconds. We formally model and prove the security of our scheme showing that its leakage is equivalent to the best known searchable encryption schemes.
Perfect Dedup
Offers deduplication over encrypted files. It allows different users to upload client-side encrypted files to the cloud, while deduplication can still be applied to those encrypted files.
Verifiable Polynomial Evaluation
A cryptographic scheme that enables a cloud provider to evaluate a polynomial over an input received from the user and to prove to the user that the output is correct. We consider a scenario whereby a user outsources a high-degree polynomial P to the cloud server. A querier then requests the evaluation of this polynomial over some input x. In addition to the output y, the server also provides a proof p of the correctness of the output. Finally, the verifier, on receiving the output and the proof, verifies p and concludes whether y equals P(x). The goal of the solution is to render the verification of the proof as efficient as possible.
Verifiable Matrix Multiplication
A cryptographic scheme that enables a cloud provider to compute the multiplication of a given vector with a matrix and to prove to the user that the output is correct. The goal of the solution is to render the verification of the proof as efficient as possible.
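The project's own scheme is not spelled out here, but Freivalds' classic probabilistic check, a related though different technique, conveys the flavour of verifiable matrix multiplication: a claimed matrix product can be verified in O(n^2) time per round instead of being recomputed in O(n^3). A minimal Python sketch (numpy assumed):

```python
# Freivalds' probabilistic verification of a claimed product C = A @ B.
# Each round picks a random 0/1 vector r and tests A @ (B @ r) == C @ r;
# a wrong C survives a single round with probability at most 1/2.
import numpy as np

def freivalds_verify(A, B, C, rounds: int = 20) -> bool:
    n = C.shape[1]
    for _ in range(rounds):
        r = np.random.randint(0, 2, size=(n, 1))
        if not np.array_equal(A @ (B @ r), C @ r):
            return False          # caught a cheating server
    return True                   # wrong answers slip through w.p. <= 2**-rounds

A = np.random.randint(0, 10, (100, 100))
B = np.random.randint(0, 10, (100, 100))
assert freivalds_verify(A, B, A @ B)          # honest result accepted
assert not freivalds_verify(A, B, A @ B + 1)  # tampered result rejected
```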
Verifiable Matching of Biometric Templates
This primitive could be offered as a service to perform biometric authentication on trusted servers while preserving the privacy of the data. It could also easily be adapted to validate ID documents against trusted data sources.
M24: so what?
March 2017 means M24 in our project timeline terminology. We have submitted four new deliverables and the second year of the project is over. So what?
So... many things really!
If you have been disconnected from latest project news, here's a few of them that you must become acquainted with:

- Three General Assembly meetings were organised: one in Heidelberg (thanks to NEC for hosting it!) in March 2016, another in Salzburg in September 2016, and in February 2017 we met in Berlin courtesy of SAP (a great host too!). These meetings are always great opportunities for the consortium to meet, stay updated on the progress achieved in the different work-packages, discuss hot research topics and plan the upcoming steps. And have a delicious business dinner in such good company, of course!
- The SECODIC workshop, held during the ARES 2016 Conference in Salzburg, took place on the 31st of August 2016. The workshop was co-organised with project WITDOM, allowing us to present our current progress and achievements, learn from others' feedback and views and share thoughts with other researchers on the topic Secure and Efficient Outsourcing of Storage and Computation of Data in the Cloud. We are already planning to repeat the experience... more information will be revealed very soon!
- Work-package 2 on "Requirements and architecture for a Secure, Trusted and Efficient Cloud" finished in November 2016. Awwwww... We will miss you! The main outputs of this WP are the Use Cases descriptions, the Consolidated set of Requirements and the final TREDISEC Framework architecture.
- 2nd Project Review took place on October 27th 2016, and we successfully passed it. All deliverables reviewed were accepted so we could safely keep moving on! Some minor recommendations were made too, and we are dutifully following them... promise!
- A first Innovation Management report has been delivered in November 2016. The conclusions of the report show that, after one and a half years of work, the project's innovation health level is thriving, and the risk that the project goals lose relevance over time is negligible. Good job!
- A Business Development Plan workshop was held in Madrid in November 2016. Thanks to the Common Exploitation Booster support services coordinated by the H2020 Common Support Centre of the European Commission, we had the guidance of an expert who helped the consortium clarify many open points regarding how to design a realistic and adequate Business Model to better approach the exploitation strategy of the project results. The outcomes of this workshop served as input for deliverable D7.7 Business models for TREDISEC.
- Work-package 6 on "Development, delivery and evaluation of the TREDISEC framework" has started. That means the framework architecture and primitives designed during the first half of the project will become tangible very soon. The evaluation criteria to validate the project results in the context of the Use Cases have been defined too.
Many milestones achieved, but quite a few still to accomplish. The third year of the project will see how the implementations of the security primitives perform in evaluation scenarios reproducing real-world conditions, and whether they indeed meet the cloud security and functional requirements we committed to. The framework will be made available to ease primitive developers' and cloud providers' lives. And the strategy for the exploitation of project results, together with the game plan for their sustainability, will see the light.
We'll keep you posted!
How TREDISEC will contribute to data security and storage efficiency in the cloud
Cloud computing has changed both business and everyday life, that's a fact. Its technological capabilities offer numerous opportunities to cut costs, drive business innovation, and enable new consumer services. On the other hand, a successful attack on critical cloud services, which might slow down or interrupt services as well as leave data in-flight or at-rest completely exposed to non-authorized parties, could result in breaches of contractual obligations or regulatory compliance, leading to reputational and financial loss and, ultimately, even loss of lives in the case of health or defence critical systems. And suffering such an attack is not an unlikely possibility at all. Not anymore.

In recent years there has been an outbreak of data breaches and global surveillance programs, and news about security breaches in large enterprises is sadly becoming normal. Some very recent and striking examples are the "Year 0" revelations from WikiLeaks about the CIA and NSA hacking smartphones in March 2017, and the worst outbreak known so far: WannaCry's global cyberattack, which happened just a few days ago.
“Year 0” unearthed the details of a massive surveillance program which was neither restricted to one geographical area (the WikiLeaks files have also revealed that the U.S. Consulate in Frankfurt is a major hacker outpost for the most important and sensitive operations), nor mitigated by the various security countermeasures already deployed within the targeted services.
More than 300,000 computers were infected by WannaCry in 150 countries, with special incidence in Russia, Taiwan, Ukraine and India, according to Czech security firm Avast. "Ransomware" has shifted from being a word known only by specialists to opening the TV news.
In the TREDISEC project we target these problems, designing and developing technologies that give a direct answer to specific and real security challenges. In doing so, we will certainly create impact in existing businesses and contribute to generating new profitable business opportunities.
Inspired by the recent global surveillance events, we consider an omnipresent attacker, which can compromise the cloud infrastructure and the communication channels between cloud providers and their respective users. The TREDISEC Security Primitives deal with the security and privacy issues associated with the storage and processing of data on the cloud, ensuring the confidentiality and integrity of outsourced data in the presence of this powerful attacker who controls the entire network.
In addition, we want to ensure that our proposed security primitives will enable scalable and efficient storage at the cloud, by supporting data compression and data deduplication, and will provide the necessary means for cloud providers to efficiently search and process encrypted data, in settings where such functionality is required.

The complete Catalogue of Security Primitives Implementations developed in TREDISEC is now available on our website.
In June we are starting a series of monthly posts that expand our catalogue of primitives. Each primitive owner will give a glimpse of their primitive, highlighting its main features. We will kick off the series with a post about the framework that combines and orchestrates the security primitives.
Biometric Features Extraction in the Encrypted Domain
This primitive could be used to prove to the user/citizen/customer that some processing (like liveness detection) has indeed been computed on the authentication data, thus enabling a check of conformance to (e.g. governmental) rules and standards.
MUSE
A multi-user searchable encryption solution that allows users (called writers) to outsource their encrypted documents. Afterwards, other users (called readers) can perform word search operations without re-downloading the entire document, provided they are authorized to do so.
Secure Data Migration Service
This tool allows cloud customers to migrate relational SQL databases into the cloud such that confidentiality is provided against the service provider but the database can still be queried.
Multi-Tenancy Enabled Encrypted Database
If data is deployed on a server in an untrusted environment (e.g. the cloud), the data owner might be afraid of honest-but-curious database administrators, other personnel, or external attackers who have access to the server. Our processing mechanism uses adjustable query-based encryption: the data is encrypted in so-called onion encryption layers, where the weakest encryption schemes form the innermost layers, which are then wrapped in stronger encryption schemes.
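To make the onion idea concrete, here is a minimal sketch, assuming a single string column with a deterministic inner layer for equality matches and a probabilistic outer layer. The key handling is invented for the example, and the HMAC stands in for a decryptable deterministic cipher; this is an illustration of the layering principle, not the TREDISEC implementation.

```python
import hmac
import hashlib
from cryptography.fernet import Fernet  # pip install cryptography

DET_KEY = b"hypothetical-per-column-key"   # illustrative key management
outer = Fernet(Fernet.generate_key())      # probabilistic outermost layer

def det_layer(value: bytes) -> bytes:
    # Inner deterministic layer: equal plaintexts map to equal tags, which
    # lets the server answer equality predicates (WHERE col = x). A real
    # adjustable-encryption system uses a decryptable deterministic cipher.
    return hmac.new(DET_KEY, value, hashlib.sha256).digest()

def onion_encrypt(value: bytes) -> bytes:
    # The server initially stores only the outer (strongest) layer; the
    # client "peels" the onion by releasing the outer key only when a query
    # actually needs the weaker, deterministic layer.
    return outer.encrypt(det_layer(value))

stored = onion_encrypt(b"alice@example.com")
```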
PoR
Proofs of Retrievability (PoR) are cryptographic proofs that enable a cloud provider to prove to a tenant that the tenant's file can be retrieved in its entirety. A tenant can request such proofs for a given file without having to download the file. The aim of the PoR primitive is to provide strong assurance of storage integrity to the tenants.
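A minimal challenge-response sketch in the spirit of PoR/PDP schemes, assuming per-block MAC tags computed by the tenant before upload; real PoR schemes additionally erasure-code the file so that passing random spot checks implies full retrievability. All names and parameters here are illustrative.

```python
import hmac, hashlib, os, random

KEY = os.urandom(32)   # tenant-side verification key
BLOCK = 4096

def tag(index: int, block: bytes) -> bytes:
    # Bind each tag to its block index so the server cannot swap blocks.
    return hmac.new(KEY, index.to_bytes(8, "big") + block, hashlib.sha256).digest()

def outsource(data: bytes):
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    return blocks, [tag(i, b) for i, b in enumerate(blocks)]

def challenge(n_blocks: int, sample: int = 10):
    # Verifier picks random block indices to spot-check.
    return random.sample(range(n_blocks), min(sample, n_blocks))

def verify(indices, proof):
    # proof: list of (block, tag) pairs returned by the storage server.
    return all(hmac.compare_digest(tag(i, b), t) for i, (b, t) in zip(indices, proof))

blocks, tags = outsource(os.urandom(100_000))   # client side, then upload
idx = challenge(len(blocks))
assert verify(idx, [(blocks[i], tags[i]) for i in idx])
```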
Advanced Encryption Resilient to Key-Leakage
The encryption primitive encrypts and partitions the file in such a way that the file can be decrypted only when all the partitions of the encrypted data, as well as the decryption key, are available.
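A minimal sketch of the "encrypt then partition" idea, assuming an n-of-n XOR splitting of the ciphertext so that every partition plus the key is needed to decrypt. The actual primitive relies on a dedicated all-or-nothing-style transform; this toy splitting only approximates the property.

```python
import os
from cryptography.fernet import Fernet  # pip install cryptography

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt_and_partition(plaintext: bytes, key: bytes, n: int = 3):
    ct = Fernet(key).encrypt(plaintext)
    shares = [os.urandom(len(ct)) for _ in range(n - 1)]
    last = ct
    for s in shares:
        last = xor(last, s)
    # Store each share on a different server: any missing share (or the
    # missing key) makes the ciphertext unrecoverable.
    return shares + [last]

def reassemble_and_decrypt(shares, key: bytes) -> bytes:
    ct = shares[0]
    for s in shares[1:]:
        ct = xor(ct, s)
    return Fernet(key).decrypt(ct)

key = Fernet.generate_key()
parts = encrypt_and_partition(b"secret file contents", key)
assert reassemble_and_decrypt(parts, key) == b"secret file contents"
```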
Secure De-duplication
Files are encrypted on the client side before being uploaded to the cloud, and decrypted on the client side after download. The encryption keys are kept by the clients. Clients acquire the encryption keys from a remote entity in a privacy-preserving way: the remote entity cannot infer or distinguish the file content from the clients' requests, yet it ensures that the same file content always derives the same encryption key. Thanks to this feature, files across multiple clients can be de-duplicated. Only one copy of a file with unique content (in its encrypted form) is stored on the cloud server. When duplicated files are deleted, only the ownership links are removed; the file copy in the cloud is removed only when the file is unique across all clients.
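The cryptographic core is message-locked (convergent) encryption, sketched below with a plain content hash as the key and a deterministic cipher, so that identical files yield identical ciphertexts. The TREDISEC primitive replaces the plain hash with the server-aided, privacy-preserving key derivation described above, which is not reproduced here.

```python
import hashlib
# AES-SIV is deterministic: equal plaintexts under equal keys give equal
# ciphertexts, which is exactly what cross-user deduplication needs.
from cryptography.hazmat.primitives.ciphers.aead import AESSIV  # cryptography >= 35

def message_locked_key(content: bytes) -> bytes:
    # Plain convergent key = H(content). The server-aided variant blinds the
    # content before contacting the key server, so the server learns nothing.
    return hashlib.sha256(content).digest()

def client_encrypt(content: bytes) -> bytes:
    return AESSIV(message_locked_key(content)).encrypt(content, None)

# Identical files produce identical ciphertexts, so the cloud can dedupe:
assert client_encrypt(b"same file") == client_encrypt(b"same file")
```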
Software Hardening (MEMCAT)
This mechanism includes a wide set of tools that ensures that an attacker has the smallest amount of resources at its disposal to attack a system. This is valuable because several zero-day exploits target unused features of the kernel.
Vulnerability Discovery
This tool behaves like a classic fuzz tester, supplying mutated input to a program and observing its behaviour. Often, mutated input leads to crashes, and the crashes reveal ways of exploiting the program. Standard fuzzers, however, do not take into account the distributed nature of some of the software that powers the cloud. The distributed fuzzer will be optimized for distributed programs and components. The output is a series of crash reports including back-traces, and the developer/tester can manually intervene to fix the bug and harden the code.
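A toy mutation fuzzer illustrating the classic loop described above: flip random bytes in a seed input, run the target, and record signal-induced crashes. The target command is a placeholder, and the distributed coordination that sets the TREDISEC fuzzer apart is omitted.

```python
import random
import subprocess

def mutate(seed: bytes, n_flips: int = 8) -> bytes:
    # Assumes a non-empty seed input.
    data = bytearray(seed)
    for _ in range(n_flips):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def fuzz(target_cmd: list, seed: bytes, rounds: int = 1000):
    crashes = []
    for i in range(rounds):
        case = mutate(seed)
        proc = subprocess.run(target_cmd, input=case, capture_output=True)
        if proc.returncode < 0:   # on POSIX: terminated by a signal -> crash
            crashes.append((i, case, proc.returncode))
    return crashes

# Hypothetical usage:
# crashes = fuzz(["./target-under-test"], open("seed.bin", "rb").read())
```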
IBM's PoW
A cryptographic protocol that regulates the interactions between a prover and a verifier. The protocol is usually executed in the context of a storage outsourcing scenario, where the prover is the client and the verifier is the (storage) service provider. The correctness property of PoW schemes requires that the owner of a file will succeed in convincing the verifier of this fact.
Key Management for Secure Deduplication (OOPRF)
This scheme is intended to be used in a scenario where multiple users are using a storage system to store data.
Container Isolation component
The Container Isolation module provides two functionalities. First, it implements a tool used to extract and encrypt a Docker container image layer in order to safely transfer it to a target Docker host. Second, it enables a container to store its data on encrypted storage media, ensuring that confidential data cannot be retrieved by an adversary with access to the host's storage backend.
TPM-based Remote Attestation (TRAVIS)
Remote Attestation is the activity of making a claim about properties of a target by supplying evidence to an appraiser over a network. The Remote Attestation generates evidence of whether or not the untrusted cloud platform is running in the expected state, and therefore of whether the result of the service, application or VM image outsourced to the cloud is trustworthy.
Multi-tenancy Access Control (EPICA)
The aim of the primitive is to provide an enforcement component for distributed attribute-based access control (ABAC) policies that ensures that authorized users always get access to the selected cloud resource (either data or service) whilst the access is refused to malicious parties, in the context of a multi-tenant cloud infrastructure.
Logical Partitioning Hypervisor
Provides light-weight isolation on many-core platforms. Allows management of encrypted and integrity-protected virtual machine images.
Secure Deletion
The primitive provides secure deletion on an honest-but-curious cloud storage. Therefore, clients can store all the files on the cloud as usual, but still achieve secure deletion, which cannot be guaranteed otherwise. The solution is based on encryption.
ML-POR with MLKeygen
Message-locked PoR and message-locked key generation. This primitive enables clients to verify the retrievability of their files while also allowing file-based deduplication, based on a dedicated message-locked key generation. Since all keying material depends on the file itself, the encryption and encoding of identical files remain identical.
SPORT
De-duplication of the authenticators used for Proofs of Retrievability across multiple users. Relying on key-message homomorphic encryption, cloud providers are able to merge the PoR authenticators generated by different users with different credentials, and the merged authenticators are verifiable by all users.
MIRROR
Proofs of retrievability for data replication. MIRROR allows data replication to be handled by the cloud provider, who will then generate proofs of retrievability for these replicas when challenged by the user.
Shared Ownership
Shared Ownership allows joint access control decisions on collaboratively created cloud data. In our work we present an instantiation of shared ownership that is more efficient than previous work and allows fair accounting through blockchains.
Authenticated Encryption
Authenticated encryption with a new security model and construction. State-of-the-art authenticated encryption with variable stretch is vulnerable to attacks that misuse the variable stretch. A new security definition is proposed, followed by a new construction.
Verifiable Storage
Verifiable storage allows a cloud customer to check whether her (big) data is stored correctly at the cloud service provider. As previously mentioned, classical data integrity techniques are no longer suitable, since they require the customer to download the entire data together with the integrity proof computed by the cloud. TREDISEC tackles this specific problem and currently investigates existing solutions, which can be classified into two categories: Proofs of Data Possession (PDP) and Proofs of Retrievability (PoR).
Requirements
- WP31-R1: Efficient storage verification
- WP31-R2: Data possession verifiability
- WP31-R3: Data extractability
- WP31-R4: Delegated verifiability
- WP31-R5: Public verifiability
Verifiable Ownership
To avoid client-side deduplication attacks, the new primitive called Proof of Ownership (PoW) was introduced with the aim of preventing leakage amplification in client-side deduplication. More specifically, the idea is that if an outside adversary somehow obtains a bounded amount of information about a given target user file F via out-of-band leakage, then the adversary cannot leverage this short information to obtain the whole file F by participating in client-side deduplication with the cloud storage server.
One of the main objectives of the project with respect to verifiability is the study of PoW protocols. There are indeed several open questions when it comes to this family of protocols, mostly revolving around performance and security. In addition, we plan to investigate PoW schemes that can be applied to encrypted data and/or data uploaded by participants that do not share mutual trust.
Requirements
- WP33-R1: Efficient ownership verification
- WP33-R2: Verifiable Ownership with data confidentiality
Content extracted from deliverable document D2.2 Requirements Analysis and Consolidation
Verifiable Computation
While storage integrity requirements address the integrity of outsourced data, computation integrity requirements address the correctness of outsourced computation.
Requirements
- WP32-R1: Computation integrity
- WP32-R2: Public verifiability
- WP32-R3: Public delegatability
- WP32-R4: Managing big databases
Content extracted from deliverable document D2.2 Requirements Analysis and Consolidation
Access control and policy enforcement
Access control is essential in protecting storage privacy. Customers must be able to trust the cloud service to let only authorized parties access their data. More sophisticated access control mechanisms enable additional or improved use cases for cloud storage. Additional policy enforcement solutions such as secure deletion give customers tighter control over their data, enhance their storage privacy and can be essential for complying with business regulations.
Requirements
- WP41-R1: Semantic and contextually constrained policy enforcement
- WP41-R2: Privacy-respectful policy enforcement
- WP44-R1: Secure deletion
- WP44-R2: Shared ownership
- WP44-R3: Assisted deletion
Resource isolation
To enforce resource isolation, systems may make use of access control and security policies. The entities enforcing these policies, such as hypervisors, operating system kernels, middleware or applications, are themselves vulnerable to attacks. Therefore, improving the security of such policy-enforcing-entities (monitors) improves the security guarantees provided by the policies. The main objective of the project is to design mechanisms that improve the security of monitors either through: (a) removing vulnerabilities present in the code base, (b) preventing such vulnerabilities from being reachable by attackers, or (c) in the presence of attacker-reachable vulnerabilities, preventing their exploitation.
Requirements
- WP42-R1: Improved resource isolation
- WP42-R2: Secure storage per tenant
Data Privacy
Cloud services introduce new security threats with respect to the confidentiality of the outsourced data. While the cloud providers are motivated to provide data confidentiality for their data storage services given the increasing security assurance demands from the cloud customers, they will also lose the advantage of optimizing their storage costs by de-duplicating the data once traditional encryption is applied to the data. TREDISEC aims to provide strong data confidentiality guarantees while benefiting from the various advantages of data deduplication in the cloud. On the one hand, we aim to devise novel schemes which ensure data confidentiality despite a powerful adversary that has access to the user's secret material: such schemes are defined as key-exposure resistant schemes. We also plan to propose techniques which support deduplication of data encrypted by different mistrusting principals (tenants, users).
Requirements
- WP43-R1: Data confidentiality
- WP43-R2: Resistance to key leakage
Content extracted from deliverable document D2.2 Requirements Analysis and Consolidation
Privacy preserving data outsourcing
Within TREDISEC, the original data of the data owner should be protected against unintended and unauthorized access, and data confidentiality should be enforced by means of encryption. The encryption of large data sets with one or multiple encryption schemes should be executed in a performance-optimised manner. At the same time, end-user application downtime needs to be minimised during the migration process in order to allow daily business operations to continue.
Requirements
- WP5-R1: Big Data confidentiality
- WP51-R1: Efficient initial encryption
- WP52-R1: Privacy preserving migration with minimum downtime
Privacy preserving processing
Privacy preserving processing deals with the design of mechanisms that enable the cloud to process encrypted data. Ideally, cloud providers should be able to conduct arbitrarily complex operations on the outsourced data. While advances in fully homomorphic encryption are promising, it is still too computationally intensive to represent a viable solution for privacy preserving processing. This is why, in TREDISEC, we focus on a different line of research that aims at designing dedicated privacy preserving mechanisms for specific applications. More specifically, we address the problem of privacy preserving processing of biometric data and privacy preserving word search. One of the most demanding operations for cloud applications is word search: a data owner or another authorized third party should be able to search for words over data that has already been outsourced in encrypted form. The idea is to exploit the properties of the outsourced data and of the functions we are interested in to come up with efficient security solutions that do not negatively impact the performance of cloud computing.
Requirements
- WP51-R2: Query analysis for optimised SQL statement execution over remotely stored encrypted data
- WP53-R1: Privacy preserving data processing
- WP53-R2: Search pattern privacy for word search
- WP53-R3: Access pattern privacy for word search
- WP53-R4: Performance / Efficiency at the client
- WP53-R5: Query expressiveness for word search
Content extracted from deliverable document D2.2 Requirements Analysis and Consolidation
The owner of outsourced data should be given the possibility to control who accesses her data and how. More specifically, the data owner should be allowed to share the ownership of her data, give read/write rights to users of her choice, and revoke such rights at any point in time. Therefore, we envisage developing mechanisms for access policy enforcement in TREDISEC. Given the multi-tenant nature of file sharing use-cases, they require solutions that control and regulate data access.
Since the cloud provider possesses plenty of computational resources that the lay customer does not, it can perform complex operations on data very fast. This encourages customers not only to outsource storage but also to outsource data processing. To facilitate the adoption of TREDISEC security services, we should focus on how to reconcile existing data processing functionalities and the pressing requirements of data confidentiality and computation integrity.
We note here that this requirement is more related to the use cases dealing with big data storage and secure processing services, since, for the case of file sharing services the cloud provider is only supposed to store the data.
Most of the data outsourced to the cloud is prone to changes. Such changes include appending new data, modifying chunks of existing data, or deleting parts of the outsourced data. Besides the classical synchronization challenge that cloud service providers must solve when multiple users update outsourced data concurrently, we should also ensure in TREDISEC that our security mechanisms work seamlessly in the presence of dynamic data. Namely, a cloud customer should not be forced to download her (entire) data to perform a small change. Ideally, this requirement should be met in all the TREDISEC use-cases; however, in TREDISEC we prioritise the file sharing use-cases.
An important requirement that cloud service providers must meet is the requirement of data availability. Availability assures the cloud customer that she can download her (entire) data at her convenience. Although in the use-cases for big data storage and secure processing, the cloud customer is not supposed to ever download her data, we believe that in TREDISEC this requirement should be met for all the use-cases.
New whitepaper on cloud towards Free Flow of Data with the participation of TREDISEC
The Data Protection, Security and Privacy Cluster of EU-funded research projects working in those areas has released the Whitepaper on Cloud technology options towards Free Flow of Data.
The document is the result of the collaborative effort of the clustered projects, and it collects the technology outcomes from the projects that help to solve some of the issues raised by the Free Flow of Data (FFD) initiative of the Digital Single Market.
The Whitepaper briefly describes and provides references to the technologies, methodologies, models, and tools researched and developed by the projects mapped to the ten areas of work of the Free Flow of Data initiative. The aim is to facilitate the identification of the state-of-the-art of technology options towards solving the data security and privacy challenges posed by the Free Flow of Data initiative in Europe.
Currently 28 projects participate in the DPSP Cluster, with total EU funding of approximately €86M, corresponding to 19 projects funded in H2020 (€64M), 6 projects in FP7 (€17M) and 2 projects in CIP (€5M). Among them, the MUSA (MUlti-cloud Secure Applications) project coordinated by Tecnalia and the OPERANDO (Online privacy enforcement, rights assurance and optimization) project have contributed to the report.
You can read the document in the link below:
Whitepaper on Cloud technology options towards Free Flow of Data
Verifiable Document Redacting
Secure Deletion Primitive
We start this series of articles about the primitives developed in TREDISEC project with Secure Deletion.
The key feature of the Secure Deletion primitive is to allow users to retain more control over their data. Once a user decides to securely delete data, it is irrecoverably deleted. Thereby, secure deletion provides privacy and compliance with existing data retention laws. We provide a new multi-user secure deletion solution.
Previous work has already studied secure deletion for different media and in different scenarios. Initially, secure deletion was studied for local storage media, such as classical hard drives. Later, due to the advances in technology, the focus shifted to flash-based storage media and as cloud storage became more popular, appropriate secure deletion solutions were devised. However, to the best of our knowledge all of these solutions are single-user or single-device solutions. Our solution provides secure deletion on collaborative cloud storage for one or more users using one or more devices.
The diagram below depicts our solution.
The group members want to use the cloud storage collaboratively, i.e. they want to upload, download, modify and delete files. In our solution we assume that the group members trust each other and are therefore not malicious. Each member uses a client app that was previously configured by the administrator. The client app translates the basic user commands into the appropriate actions. If a user uploads a file, the client app encrypts it, stores the encrypted file on the cloud storage and shares the encryption key with the other members in such a way that the cloud learns nothing about the key. Through careful handling of the key over the file's lifetime, secure deletion can be achieved.
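A minimal sketch of the key idea, assuming one key per file kept off-cloud: destroying the key renders the ciphertext irrecoverable, which is what "secure deletion" means here. The class and names are illustrative, and the group key distribution described above is omitted.

```python
from cryptography.fernet import Fernet  # pip install cryptography

class SecureStore:
    def __init__(self, cloud: dict):
        self.cloud = cloud   # stand-in for the remote (untrusted) storage
        self.keys = {}       # kept only on trusted client devices

    def upload(self, name: str, data: bytes):
        key = Fernet.generate_key()
        self.keys[name] = key
        self.cloud[name] = Fernet(key).encrypt(data)

    def download(self, name: str) -> bytes:
        return Fernet(self.keys[name]).decrypt(self.cloud[name])

    def secure_delete(self, name: str):
        # The ciphertext may linger in cloud backups, but without the key
        # it is irrecoverable.
        del self.keys[name]
        self.cloud.pop(name, None)

store = SecureStore(cloud={})
store.upload("report.txt", b"confidential")
store.secure_delete("report.txt")
```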
Today, many companies use cloud storage for different tasks, as it allows fast and efficient collaboration between employees. Based on our solution, more advanced cloud applications that provide secure deletion can be developed. Note that, due to data privacy regulation, such a secure deletion solution is a necessity for many companies if they want to leverage the advantages of the cloud.
5th General Assembly meeting of TREDISEC
The project consortium will meet for 2 days at Morpho's premises in Paris for the 5th General Assembly.
The first day, Oct. 5th, will host a plenary session to report on the status of the different work packages and to discuss the strategy for the last 6 months of the project.
On the second day, Oct. 6th, two parallel workshops will run, devoted to WP6 and WP7 respectively. The WP6 workshop will discuss the status of the use case validation scenarios; the WP7 workshop will focus on the exploitation and sustainability strategy of the project.
Secure Data Migration Service
The second primitive chosen to illustrate the results obtained during the TREDISEC project on our corporate blog is the Secure Data Migration service.
Our Secure Data Migration Service allows companies to securely outsource databases such as those used by enterprise resource planning software into the cloud. All sensitive data is stored encrypted in the cloud and all keying material for decryption is kept solely at the company. Despite encryption, our solution preserves the ability to execute arbitrary database queries.
Previous work focussed on how to store and query encrypted data in a relational database. For example, in adjustable encryption [0], every plaintext data column is encrypted multiple times in so-called onions, whereby each onion consists of one or more encryption layers. Depending on the functionality required, the corresponding layer is exposed to the database server. We provide an important step that was previously missing: fast, scalable and efficient initial encryption of legacy data. Without this step, companies cannot make the transition to secure cloud storage.
Our data provisioning process is depicted in the diagram. It consists of five steps:
1. Sensitivity Selection: the data owner reviews its on-premise data and selects the sensitivity of each data column.
2. SQL Preparation: either historic or expected future SQL statements are used to calculate the best possible execution strategy. It is important to transfer as little data as possible to save network bandwidth and processing time; to this end, we perform an SQL query optimization that rearranges SQL statements to achieve low transfer overhead while preserving the semantics of the original query.
3. Hot State Analysis: the initial encryption is optimized by removing encryption onions and layers where possible, based on pre-recorded SQL statements.
4. Storage Optimization: the storage space required at the cloud provider is reduced. We assume that the cloud provider uses a column-store database with dictionary compression, and we reduce probabilistic encryption to deterministic encryption where possible without losing security.
5. Encryption and Transfer: the provisioning process concludes by encrypting the legacy data in a Hadoop cluster and transferring it to its new location at the cloud provider. This step uses the onion structures produced by the previous steps.
Our solution can be used by companies to securely outsource the database of their enterprise resource planning software into the cloud. It is secure because the confidentiality of all sensitive data items is preserved by means of encryption, and all keying material is kept locally at the company, where it is not accessible to the cloud provider.
Keywords: Privacy-Preserving Data Outsourcing, Storage Efficiency
D5.1. Design of Provisioning Framework
TREDISEC will present EPICA at the Second Joint Workshop of the DPSP Cluster
TREDISEC will present a demo of its results for the first time at the Second Joint Workshop of the DPSP Cluster, which will be held in Amsterdam on the 19th of September. Javier García, from Atos, will present a demo of the Multi-tenancy Access Control primitive, together with other projects belonging to the cluster such as SWITCH, PASSWORD, UNICORN, RESTASSURED, PRISMACLOUD and MUSA.
The aim of the primitive is to provide an enforcement component for distributed attribute-based access control (ABAC) policies that ensures that authorized users always get access to the selected cloud resource (either data or service) whilst the access is refused to malicious parties, in the context of a multi-tenant cloud infrastructure (more information here: http://www.tredisec.eu/content/multi-tenancy-access-control-epica).
The Second Workshop of the DPSP Cluster is scheduled within the workshop organized by CloudWATCH2 for SMEs, industry leaders and policy experts to consider how the European cloud computing market will be shaped over the next 3 years (http://www.cloudwatchhub.eu/cloudwatch-summit-agenda-now-online).
If you are interested in attending, the scheduled dates are the following:
1. DPSP Cluster Workshop . 19th Sep (13:30-17:15).
2. CLOUDWATCH2 MTRL Workshop. 19th Sep (17:30-19:00).
3. CLOUDWATCH2 Final Workshop. 20th Sep (10:00-16:30).
TREDISEC will participate at ISSE Conference: The Future of Digital Security and Trust
ISSE The Future of Digital Security and Trust is an annual conference focused on cybersecurity, privacy, identity and Cloud.
This year, coinciding with the final stretch of the project, TREDISEC has organized a workshop within the ISSE Conference to present the main results achieved over its duration.
The main objective of the session is to promote the results obtained by the project showing the main achievements regarding research, latest innovations and market trends.
There are 3 presentations planned related to the project, of 10 minutes each, to showcase TREDISEC results from different perspectives:
(1) innovation: describing the novelty and relevance of the project main results.
(2) practical: a live demo to show how TREDISEC technologies perform in some representative use case scenarios.
(3) exploitation: outlining the applications to various business contexts and the go-to-market strategy.
The workshop is addressed to a varied audience, ranging from researchers in topics related to privacy, security & trust, to policy-makers responsible for designing and promoting a strategy on secure cloud computing in the EU context, and industry experts or IT managers who have obligations or business interests in securing personal and/or sensitive data.
EPICA
EPICA (Efficient and Privacy-respectful Interoperable Cloud-based Authorization) is a software implementation that controls access to resources (either services or data) in multi-tenant cloud environments. EPICA supports an ABAC-based model that extends XACML policies to represent trust relationships between tenants (so called “tenant-aware XACML policies”) in order to govern cross-tenant access to shared cloud resources.
The advances introduced by the primitive implementation, with regards to the current state of the art in the domain of Access Control for Cloud-based environments, are two-fold:
1) EPICA provides specific support for cloud requirements such as multi-tenancy, and is compatible with storage efficiency techniques (i.e. file-based deduplication and compression);
2) EPICA advances the existing implementations of XACML v3: building upon an existing open source implementation of the standard, it extends it with new functionalities and improves existing ones for full coverage of the XACML reference architecture.
Besides, EPICA supports high availability and performance deployments, implementing an efficient policy retrieval approach with scalable policy stores.
The architecture of EPICA has been designed taking into account interoperability and privacy concerns, so the information exchanged between the cloud provider and the user, required to perform authorization, remains minimal.
As can be seen in Figure 1, the security primitive has several components. Some of them have been created from scratch, while others extend the original open source reference implementation.
EPICA adapts to existing cloud management systems in several ways: (a) it enables configuration of different options to adapt to a specific scenario (type of policy store, multi-tenancy model, high-performance mode, policy generation mechanism and distributed attributes mode); (b) the policy administration process is supported by a set of operations offered as a RESTful API (cf. Figure 2); (c) the Policy Enforcement Point (PEP) is offered in two forms (.jar file and web service) to facilitate deployment in different cloud environments; and (d) the primitive allows for a distributed deployment of the authorization engine and policy store following a publish-subscribe architectural pattern.
EPICA fulfils end-to-end security requirements while preserving critical functional requirements of cloud computing, such as scalability, availability and high performance. Besides, the approach is applicable to Authentication and Authorization for Constrained Environments (ACE), such as IoT or 5G scenarios, where strong fine-grained mutual authentication and authorization schemes are critical to protect frequency and radio/communication resources, to deliver 5G networks services on demand and comply with different regulation constraints.
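To illustrate the kind of decision EPICA's authorization engine makes, here is a minimal tenant-aware ABAC check. The attribute names and the trust map are invented for the example; EPICA itself evaluates full XACML v3 policies rather than hand-written rules like these.

```python
def permits(subject: dict, resource: dict, action: str, trust: dict) -> bool:
    # trust[t] = set of tenants whose resources tenant t may access
    same_tenant = subject["tenant"] == resource["tenant"]
    trusted = resource["tenant"] in trust.get(subject["tenant"], set())
    role_ok = action == "read" or subject.get("role") == "admin"
    return (same_tenant or trusted) and role_ok

# Hypothetical cross-tenant scenario: tenantA trusts tenantB's resources.
trust = {"tenantA": {"tenantB"}}
alice = {"tenant": "tenantA", "role": "user"}
doc = {"tenant": "tenantB"}
assert permits(alice, doc, "read", trust)        # cross-tenant read allowed
assert not permits(alice, doc, "write", trust)   # write requires admin role
```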
Keywords: Security Requirements: Confidentiality; Cloud Functional Requirements: Multi-Tenancy; Cloud Non-Functional Requirements: High Performance, Usability, High Availability
Is there a “rowhammer” for MLC NAND Flash SSDs? An analysis of filesystem attack vectors
Dynamic Provable Data Possession Protocols with Public Verifiability and Data Privacy
Most relevant dissemination publications of Tredisec
- ROTE: Rollback Protection for Trusted Execution
http://www.tredisec.eu/content/rote-rollback-protection-trusted-execution
Conference: USENIX Security 2017
- Securing Cloud Data under Key Exposure
http://www.tredisec.eu/content/securing-cloud-data-under-key-exposure
Published in: IEEE Transactions on Cloud Computing
- A Leakage-abuse Attack Against Multi-User Searchable Encryption
http://www.tredisec.eu/content/leakage-abuse-attack-against-ult-user-searchable-encryption
Conference: The 17th Privacy Enhancing Technologies Symposium
- Mirror: Enabling Proofs of Data Replication and Retrievability in the Cloud
http://www.tredisec.eu/content/mirror-enabling-proofs-data-replication-and-retrievability-cloud
Conference: USENIX Security 2016
- Transparent Data Deduplication in the Cloud
http://www.tredisec.eu/content/transparent-data-deduplication-cloud
Conference: ACM CCS 2015
- Efficient Techniques for Publicly Verifiable Delegation of Computation
http://www.tredisec.eu/content/efficient-techniques-publicly-verifiable-delegation-computation
Conference: ASIACCS '16
- Towards Shared Ownership in the Cloud
Accepted to IEEE Transactions on Information Forensics and Security (TIFS)
IBM data ownership toolkit
Proof of Ownership (PoW) is a cryptographic protocol that regulates the interactions between a prover and a verifier. The protocol is usually executed in the context of a storage outsourcing scenario, where the prover is the client and the verifier is the (storage) service provider.
Several PoW schemes have been proposed in the literature. While they are proven secure under similar security models, they differ widely in their performance impact. With this framework we present a common set of APIs that can be used to integrate PoW schemes into a storage system. The system can then choose dynamically which scheme to use depending on the context (e.g. premium users get the PoW solution that is less taxing on the client side). This way the storage administrator does not have to choose which scheme to adopt when building the system; the determination can be deferred to runtime.
In a PoW scheme, a prover and a verifier interact. First, the prover and the verifier exchange short information about a file (e.g. its hash). Then they engage in a cryptographic protocol whose purpose is to establish that the prover indeed owns the file. The correctness property of PoW schemes requires that the owner of a file will succeed in convincing the verifier of this fact. The security property guarantees that a malicious prover who is not in possession of the file will succeed in convincing the verifier with only negligible probability, even in the presence of a legitimate file owner who colludes with the adversary, subject to certain restrictions. Cryptographically speaking, the prover is allowed to access an oracle that provides information on the file: the prover may submit the description of a function to the oracle, and the oracle will invoke the function on the file and return the function's output to the prover. The leakage is bounded only in terms of the execution time of the function and the size of its output. The protocol is supposed to remain resilient even in the presence of this oracle.
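A toy challenge-response PoW, assuming the verifier (the deduplicating storage server) already holds the file and asks the uploader to hash it with a fresh nonce before granting ownership. This baseline is not secure against the bounded-leakage adversary described above, and the toolkit's actual APIs and schemes are IBM's, not these.

```python
import hashlib, hmac, os

def pow_challenge() -> bytes:
    # Verifier (storage server) picks a fresh nonce per protocol run.
    return os.urandom(16)

def pow_prove(file_data: bytes, chal: bytes) -> bytes:
    # Prover (uploading client) must actually hold the full file.
    return hashlib.sha256(chal + file_data).digest()

def pow_check(stored: bytes, chal: bytes, proof: bytes) -> bool:
    # The server already stores the deduplicated file, so it can recompute.
    return hmac.compare_digest(proof, hashlib.sha256(chal + stored).digest())

stored = b"the file already present at the provider"
chal = pow_challenge()
assert pow_check(stored, chal, pow_prove(stored, chal))
```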
Today, all storage providers employ data compression and deduplication as an important way to better utilize their storage backends. However, careless use of deduplication by a cloud storage provider exposes several security vulnerabilities. PoW schemes are the security mechanism used to counter this threat.
Keywords: Data Privacy, Privacy-preserving data outsourcing, Verifiable ownership
TREDISEC workshop @ISSE 2017 event: "How to reconcile cloud efficiency with security & privacy"
Project TREDISEC (www.tredisec.eu), which receives funding from the European Union's Horizon 2020 (H2020) programme, has organized a workshop session at ISSE 2017 – The Future of Digital Security and Trust event.
The TREDISEC workshop will be held at the Deloitte premises, close to Brussels Airport (Belgium), on November 15th 2017 at 11:30. The workshop will promote the results obtained by the project so far, present the next steps towards reaching the market and, most importantly, gather feedback from the audience by fostering a dynamic discussion on hot research topics, the most promising innovations and market trends in the cloud security field.
The ISSE 2017 event hosts many international non-profit industry organizations, which combine their resources, knowledge and information to create a two-day conference focusing on European public and private trust related to cybersecurity, privacy, identity and cloud. The TREDISEC workshop aims to attract a diverse audience interested in these topics, as well as in related emerging areas such as new generation clouds, IoT and 5G.
Ms. Elena González, Communication Manager of the project, will chair the Workshop EU Projects 2 strand of the second day programme of the event. She will launch the workshop with a talk entitled “Key challenges in cloud security and how TREDISEC fits in”, followed by a project specific presentation of “TREDISEC project innovations and results”.
An interactive demo session will follow. First, Jose Fran Ruiz, who works in the project as Technical Project Manager, will present the TREDISEC Framework, which provides functionalities for managing, testing and using the security and functional solutions (the so-called security primitives) developed in the project. Second, Andreas Fischer, Senior Researcher at SAP, will present one of the project use case scenarios, “Database Migration into a Secure Cloud”, and show how TREDISEC contributes to enhancing cloud services with a Secure Data Migration primitive.
To conclude the workshop, David Vallejo, Project Manager at Arsys Internet S.L., will explain the strategy for bringing TREDISEC results to market and describe a sustainability strategy to keep the project alive beyond its official end date.
For more information about the workshop session and how to participate, please contact the workshop coordinator, Ms. Elena González at elena.gonzalez@atos.net
Also, find information at the ISSE event online space:
https://www.isse.eu.com/2017-isse-conference-programme-day-1/
New infographic: What does TREDISEC offer you?
Infographic which illustrates the main results of the project:
- Framework
- Primitives
- Recipes
And the specific functionalities offered to each customer target:
- Security engineer
- Developers of security solutions
- Integrators
- Cloud service providers
New infographic: TREDISEC Benefits
Infographic which illustrates the main benefits provided by TREDISEC for each target:
- Security engineers
- Developers of security solutions
- Integrators
- Cloud service providers
TREDISEC@ISSE17: "How to reconcile cloud efficiency with security & privacy"
Only four months left to finish TREDISEC!
It has been a long way, but the time has come. We are in the final countdown and we have obtained awesome results along this tough but exciting period. So let's start showing off!
In order to do that, we chose the conference ISSE: The Future of Digital Security & Trust. In its 19th edition, the conference was held at Deloitte's premises in Belgium, an impressive location.
The TREDISEC workshop took place in the morning of the second day of the conference and was titled "How to reconcile cloud efficiency with security & privacy". This is our mantra and we wanted to get across the message that TREDISEC provides the necessary means to achieve it.

Elena González (ATOS), Communication Manager of the project, gave an overview of the main objectives of the project. She explained how TREDISEC's innovations contribute to enlarging and advancing the EU cloud security market landscape. Innovation is a key feature of TREDISEC, and we have made a significant effort to keep ahead of the game.

But we also wanted to show tangible results, so next came the demonstrations:
Jose Ruiz (ATOS), TREDISEC Architecture Design leader, presented the TREDISEC framework, showing its main functionalities with a focus on the primitive testing and deployment features. Right after, Andreas Fischer (SAP), leader of WP5 "Processing and Encryption", presented one of the primitives of the project catalogue, the "Secure Data Migration Service", in the context of Use Case 6: "Database Migration into a Secure Cloud".
Last but not least, David Vallejo (ARSYS), leader of WP7 "Exploitation, Dissemination and Communication", explained how TREDISEC aligns with the market and described our go-to-market approach.
We received very valuable feedback from the audience, since several questions were posed throughout the entire session and some comments and suggestions were collected from the questionnaires shared. We'll make sure all these are taken into account in the remaining activities.
In the upcoming months and until the end of the project, we still have some important tasks to accomplish.
The technical validation of the framework and primitives in the context of four industrially relevant scenarios, on the one hand, will give us a realistic view on the maturity and viability of our results.
On the other hand, we need to go a step further in defining the strategy for exploiting these results. We have several options already on the table, currently under discussion, and need to come to an agreement to outline a sustainable path, for both the framework and the primitives, that guarantees a successful entrance into the market.
Dear reader, if you have read until this point...Wish us good luck!
We are sure we will reach our goals with the help of our fantastic team.
Message-Locked Proof of Retrievability (ML-PoR)
ML-PoR is a proof of retrievability scheme that enables a cloud user to verify the correct storage of her outsourced data while allowing the cloud to perform secure deduplication whenever there is redundancy.
As opposed to existing PoR mechanisms, ML-PoR is compatible with cloud functional operations, namely data reduction through secure deduplication. The PoR encoding of a given file therefore does not prevent the cloud from deduplicating redundant data. At the same time, the user is always able to verify the retrievability of her file, even if it is deduplicated.
ML-PoR aims at consolidating PoR with file-level deduplication by devising a generic technique that makes a PoR scheme compatible with secure deduplication. In particular, the secret key used by the underlying PoR is derived from the file content thanks to a server-aided message-locked key generation protocol named ML-KeyGen, so that users owning identical files obtain the same keying material without any interaction among users. Moreover, in order to ensure deduplication, the client should use not only the same keying material but also the same parameters for all the operations performed during the PoR pre-processing stage (e.g. ECC parameters, the encryption algorithm, etc.). As there is no modification to the underlying PoR protocol, we only illustrate the newly proposed ML-KeyGen in the following diagram.
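The flavour of server-aided message-locked key generation can be sketched with an RSA blind-signature-style protocol, as used in comparable server-aided schemes (e.g. DupLESS). The client blinds the file hash, the key server exponentiates it with its secret key, and the client unblinds, so the server never sees the file hash yet identical files yield identical keys. Parameters are illustrative; this is not the ML-KeyGen specification.

```python
import hashlib
import secrets
from cryptography.hazmat.primitives.asymmetric import rsa  # pip install cryptography

server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
N = server_key.public_key().public_numbers().n
e = server_key.public_key().public_numbers().e
d = server_key.private_numbers().d

def client_blind(file_data: bytes):
    h = int.from_bytes(hashlib.sha256(file_data).digest(), "big") % N
    r = secrets.randbelow(N - 2) + 2           # blinding factor
    return (h * pow(r, e, N)) % N, r           # only the blinded value is sent

def server_evaluate(blinded: int) -> int:
    return pow(blinded, d, N)                  # server sees only the blinded hash

def client_unblind(evaluated: int, r: int) -> bytes:
    sig = (evaluated * pow(r, -1, N)) % N      # = H(file)^d mod N  (Python >= 3.8)
    return hashlib.sha256(sig.to_bytes(256, "big")).digest()  # 32-byte file key

# Two independent runs over the same file yield the same key:
b1, r1 = client_blind(b"some file")
b2, r2 = client_blind(b"some file")
assert client_unblind(server_evaluate(b1), r1) == client_unblind(server_evaluate(b2), r2)
```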
ML-PoR can help any cloud storage application to offer some means to guarantee the correct storage of users’ data while still being able to take advantage of data reduction technology.
Container Isolation
Container Isolation component preserves confidentiality of sensitive data in a containerized virtual environment. By exploiting Docker’s layered filesystem users can securely manipulate images throughout their life cycle. The component can secure both data on disk, by encrypting/decrypting on the fly, and data migration by enhancing the image distribution process.
A layered filesystem provides a flexible mechanism for easy versioning and efficient distribution of container images. However, current systems do not protect sensitive data from malicious privileged users, since encryption is not natively supported. The Container Isolation component adds to the benefits of such a filesystem by providing data isolation in a Docker virtual environment. Our system offers transparent mechanisms to secure sensitive data by adding encryption/decryption capabilities to the image creation and distribution process.
Our solution relies on the underlying union filesystem to provide encryption/decryption for data-at-rest and data-on-the-move in Docker containers. The data-on-the-move module encrypts and securely distributes the sensitive layers of a Docker image (fig. 2): the public layers of an image are pulled from the Docker image registry, while the topmost layers containing the sensitive data are transferred in encrypted form and decrypted at the destination. The data-at-rest module encrypts/decrypts sensitive on-disk data on the fly, transparently to the user: the corresponding layer is mapped to an encrypted volume leveraging the inherent mechanisms of different cloud computing platforms.
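As a rough illustration of the data-on-the-move idea, the sketch below encrypts only the sensitive top layer of an image before distribution, while public layers travel in the clear. It uses AES-GCM from the `cryptography` package; the layer representation and function names are hypothetical and do not reflect the component's actual implementation.

```python
# Illustrative sketch: only the sensitive top layer is encrypted before
# being pushed to the registry; key holders decrypt at the destination.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_layer(layer_bytes, key):
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, layer_bytes, None)

def decrypt_layer(blob, key):
    nonce, ct = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ct, None)

key = AESGCM.generate_key(bit_length=256)   # held only by legitimate users

public_layers = [b"base OS layer", b"runtime layer"]   # pushed as-is
sensitive_layer = b"app secrets and private data"       # encrypted

pushed = public_layers + [encrypt_layer(sensitive_layer, key)]
# ... the registry stores 'pushed'; at the destination, key holders run:
restored = decrypt_layer(pushed[-1], key)
assert restored == sensitive_layer
```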
Containers are constantly gaining ground in virtualization as a lightweight and efficient alternative to hypervisor-based VMs, with Docker being a popular representative. In virtual environments (i.e., cloud computing platforms), where multiple users share resources, the confidentiality of user data is important. We provide a mechanism to securely manage Docker images with sensitive data in such environments.
MUSE: Multi-User Searchable Encryption
MUSE is a searchable encryption scheme that enables cloud users to upload their encrypted data and authorize other users to perform lookup queries over these data. The main privacy guarantee that MUSE offers is that it does not disclose the content of the data and the queries to the cloud.
MUSE is one of the first solutions suited to the multi-user context. Compared to existing multi-user searchable encryption solutions, this new primitive does not leak sensitive information and takes user-cloud collusions into account. Furthermore, MUSE remains scalable with respect to the number of users querying documents.
This new MUSE protocol makes use of new building blocks such as Oblivious Transfer and Garbled Bloom Filters, and is the first to ensure a high level of privacy in the presence of users colluding with the CSP. Thanks to the use of an additional third party called the Query Multiplexer (QM), a user acting as a reader is able to efficiently search a large number of indexes, while each of these indexes is encrypted with the secret key of a different user acting as a writer. The new protocol is illustrated in the following figure. Writers send encrypted indexes to the data host (DH) and authorizations to QM, and readers send trapdoors to QM. When QM receives a trapdoor t_(r,s) from reader r, QM multiplexes this query into several ones, each destined to one encrypted index. DH further processes each of these queries and sends the (encrypted) results to QM, which filters out negative results and sends back the set of indexes the search query actually matched.
MUSE could be very useful in a scenario whereby individuals outsource their data to a cloud server and allow some third-party services (such as recommendation systems) to perform a search over their data, for example in order to increase their quality of service, while the cloud storage server does not discover any information about the processed data.
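The sketch below mimics only the multiplexing data flow (one reader trapdoor fanned out by QM into one sub-query per authorized writer index at DH, with QM filtering the results). HMAC-based tokens stand in for the real cryptographic machinery, which relies on oblivious transfer and garbled Bloom filters; all names are illustrative.

```python
# Toy illustration of MUSE's data flow only: HMAC tokens replace the
# real machinery (oblivious transfer, garbled Bloom filters).
import hmac, hashlib

def token(key, word):
    return hmac.new(key, word.encode(), hashlib.sha256).digest()

# Writers encrypt their indexes under their own keys and send them to DH.
writer_keys = {"alice": b"k_alice", "bob": b"k_bob"}
dh_indexes = {
    "alice": {token(writer_keys["alice"], w) for w in ["cloud", "security"]},
    "bob":   {token(writer_keys["bob"],   w) for w in ["biometrics"]},
}
# QM knows which writers authorized reader r (in the real scheme it holds
# re-keying material; here, for simplicity, the writers' keys themselves).
authorizations = {"reader1": ["alice", "bob"]}

def search(reader, word):
    """QM multiplexes the reader's query into one sub-query per index."""
    matches = []
    for writer in authorizations[reader]:
        q = token(writer_keys[writer], word)    # per-index sub-query
        if q in dh_indexes[writer]:             # DH evaluates each one
            matches.append(writer)              # QM filters the results
    return matches

print(search("reader1", "cloud"))   # -> ['alice']
```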
PerfectDedup
PerfectDedup is a new scheme that enables the cloud to securely deduplicate redundant data when it is encrypted. PerfectDedup relies on the use of different encryption techniques based on the popularity of the data: Popular data are protected under convergent encryption and can therefore be deduplicated; unpopular data segments which are likely to remain unique are protected under semantically-secure symmetric encryption.
Compared to existing solutions which mostly support file-level deduplication, PerfectDedup achieves secure deduplication at the block level which sometimes leads to higher storage space savings compared to file-level deduplication. Furthermore, compared to a similar solution which also differentiates data protection depending on data popularity, PerfectDedup significantly reduces the storage and communication overhead and optimizes the computational cost as it relies on symmetric encryption techniques only.
PerfectDedup relies on a popularity-based secure deduplication solution that defines different encryption techniques for popular and unpopular data. In order to use the adequate encryption technique, a user first needs to discover the popularity of her data segment. Hence PerfectDedup defines a novel secure lookup protocol that leverages a secure perfect hash function (PHF), which, given an input set of n convergently encrypted data segments, finds a collision-free hash function that maps the ID of each encrypted data segment to an integer. In addition to the cloud server, PerfectDedup introduces a semi-trusted server called the Index Service (IS), which is responsible for keeping track of unpopular blocks and therefore helps the user handle the popularity transition, that is, the phase in which a block becomes popular and its convergently encrypted version needs to be uploaded.
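A minimal sketch of the popularity-based encryption choice is given below: popular blocks are convergently encrypted (the key is derived from the block itself), so identical blocks yield identical ciphertexts and can be deduplicated, while unpopular blocks get fresh random keys. The `is_popular` flag stands in for the PHF-based secure lookup, which is omitted here.

```python
# Popularity-based encryption sketch; the PHF lookup and the Index
# Service are replaced by a hypothetical 'is_popular' oracle.
import os, hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

ZERO_NONCE = bytes(12)  # deterministic nonce: acceptable here only
                        # because each convergent key encrypts one block

def protect(block, is_popular):
    if is_popular:
        key = hashlib.sha256(block).digest()               # convergent key
        ct = AESGCM(key).encrypt(ZERO_NONCE, block, None)  # deterministic
    else:
        key = AESGCM.generate_key(bit_length=256)          # random key
        nonce = os.urandom(12)                             # semantic security
        ct = nonce + AESGCM(key).encrypt(nonce, block, None)
    return key, ct

b = b"a popular data block"
_, c1 = protect(b, True)
_, c2 = protect(b, True)
assert c1 == c2   # identical ciphertexts -> the cloud can deduplicate
```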
PerfectDedup can be used by any cloud storage application and offers data confidentiality for end-users while allowing deduplication, hence optimizing storage space.
Feature Extraction Over Encrypted Data
Privacy and legal reasons prevent companies operating biometric systems from taking advantage of cloud computing, unless the data are encrypted. An attractive property would be the ability to perform computations, such as extracting features, on these encrypted data. Our primitive makes it possible to apply signal processing algorithms over encrypted data.
Our primitive is linked to two notions: homomorphic encryption, which enables computation over encrypted data without decryption, and neural networks (NNs), which are state-of-the-art machine learning algorithms used to classify images. The state-of-the-art implementation, CryptoNets, proposes a tailored NN to extract features from small encrypted images and classify them. However, the proposal does not scale with the size of the image, due to the choices made in defining the NN. Our primitive improves over CryptoNets by handling larger NNs. We can thus obtain a more accurate classification or deal with larger images.
The high-level view of the scheme is given in the figure. Depending on the parameters of the homomorphic encryption scheme, the number of successive multiplications that can be performed over a ciphertext is limited; once a fixed threshold is exceeded, decryption is no longer possible. Neural networks are not directly compatible with homomorphic encryption because they use a non-linear operation (called ReLU) that cannot be expressed with a small number of multiplications. Our primitive replaces ReLU with a low-degree polynomial and proposes a way to compute this polynomial that enables efficient training of the NN, yielding a NN whose accuracy compares well with the same NN using the original ReLU.
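The following toy example illustrates the core trick: fitting a low-degree polynomial to ReLU so that the network only uses additions and multiplications and stays within the multiplicative budget of the homomorphic scheme. The fitted coefficients and the interval are illustrative, not the primitive's trained values.

```python
# Fit a degree-2 polynomial to ReLU on [-4, 4]; a network trained with
# this activation uses only additions/multiplications, so the trained
# model can later be evaluated under FHE with few multiplications.
import numpy as np

x = np.linspace(-4, 4, 1000)
relu = np.maximum(x, 0)

coeffs = np.polyfit(x, relu, deg=2)      # least-squares fit: a*x^2 + b*x + c
poly_act = np.polyval(coeffs, x)

print("fitted polynomial coefficients:", np.round(coeffs, 4))
print("max abs error on [-4, 4]:", np.abs(poly_act - relu).max().round(3))
```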
Managing big identity databases containing millions of records is difficult, especially when updates are needed. In biometric systems, updates typically reprocess new templates from the stored raw images, the goal being an improvement of the system's accuracy. Updates take time and require dedicated in-house hardware. Our primitive is a first step toward outsourcing encrypted biometrics, since it makes such updates possible.
Verifiable Biometric Matching
Several applications rely on biometrics to authenticate users. Since performing a biometric matching requires advanced technical knowledge, companies may prefer to delegate this computation to a specialized cloud server. Our Verifiable Biometric Matching enhances the confidence in a delegated biometric matching by adding a proof that the result of the computation is correct.
Verifiable Computing is a recent research area which aims at proving that a computation was correctly executed, making no assumptions about possible failures and using no specialized hardware. We perform the proof computation using a general-purpose verifiable computation scheme which produces a short proof that the biometric matching was correct. In existing schemes, the bottleneck is the prover time. Thus, in order to decrease the proof computation time, we represent the computation to be verified efficiently and achieve a performance improvement for the prover.
All general-purpose verifiable computing systems can verify computations over elements of a large finite field; typically, the field elements are about 250 bits long.
We leverage the much shorter size of the input elements in a biometric matching (16 or 32 bits each) to define a multiplexing technique: we pack several input elements of the matching into a single field element and thus perform several multiplications simultaneously. This results in a more efficient computation and therefore decreases the prover time compared with existing verifiable computing systems. We fine-tuned the parameters to pack as many inputs as possible while still being able to recover the final result.
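The packing idea can be demonstrated with plain integers standing in for field elements: pack the 16-bit components of the two templates into two big integers so that a single big multiplication computes the whole inner product, provided the slot width is chosen so that no coefficient overflows into its neighbor. This is a toy illustration under our own parameter choices, not the primitive's exact encoding.

```python
# Pack 16-bit components into one large integer (standing in for a
# ~250-bit field element); one multiplication yields the inner product.
n = 8                                   # vector length
W = 16 + 16 + n.bit_length() + 1        # bits per slot, with headroom

a = [3, 65535, 7, 12, 900, 1, 0, 42]    # 16-bit template components
b = [5, 2, 65535, 9, 3, 77, 8, 1]

A = sum(ai << (W * i) for i, ai in enumerate(a))
B = sum(bj << (W * i) for i, bj in enumerate(reversed(b)))

# The coefficient of slot n-1 in A*B equals sum_i a[i]*b[i].
inner = (A * B >> (W * (n - 1))) & ((1 << W) - 1)
assert inner == sum(x * y for x, y in zip(a, b))
print(inner)
```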
Online banks are beginning to rely on biometrics for smartphone applications and therefore delegate the matching process to specialized companies. Adding a proof of correctness to the computation enables the bank to audit the company to which the biometric matching was delegated, and thus incentivizes the delegation of biometric authentication.
ClearBox
ClearBox allows a storage service provider to transparently attest to its customers the deduplication patterns of the (encrypted) data that it is storing. By doing so, ClearBox enables cloud users to verify the effective storage space that their data occupies in the cloud, and consequently to check whether they qualify for benefits such as price reductions.
The literature features a large number of proposals for securing data deduplication in the cloud. All these proposals share the goal of enabling cloud providers to deduplicate encrypted data stored by their users. Such solutions allow the cloud provider to reduce its total storage, while ensuring the confidentiality of stored data. By doing so, existing solutions increase the profitability of the cloud, but do not allow users to directly benefit from the savings of deduplication over their data.
ClearBox relies on a gateway to orchestrate cross-user file-based deduplication prior to storing files on (public) cloud servers. ClearBox ensures that files can only be accessed by legitimate owners, resists a curious cloud provider, and enables cloud users to verify the effective storage space occupied by their encrypted files in the cloud (after deduplication). By doing so, ClearBox provides its users with full transparency on the storage savings exhibited by their data; this allows users to assess whether they are acquiring appropriate service and price reductions for their money, in spite of a rational gateway that aims at maximizing its profit.
ClearBox ensures a transparent attestation of the storage consumption of users whose data is effectively being deduplicated — without compromising the confidentiality of the stored data.
ClearBox employs a novel Merkle-tree based cryptographic accumulator which is maintained by the gateway to efficiently accumulate the IDs of the users registered to the same file within the same time epoch. Our construct ensures that each user can check that his ID is correctly accumulated at the end of every epoch. Additionally, our accumulator encodes an upper bound on the number of accumulated values, thus enabling any legitimate client associated to the accumulator to verify (in logarithmic time with respect to the number of clients that uploaded the same file) this bound.
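For intuition, here is a minimal Merkle-tree accumulator: the gateway publishes the root for an epoch, and any user can verify membership of her ID with a logarithmic-size proof. This sketch illustrates the mechanism only; ClearBox's actual construct additionally encodes the upper bound on the number of accumulated values mentioned above.

```python
# Minimal Merkle-tree accumulator with logarithmic membership proofs.
import hashlib

H = lambda x: hashlib.sha256(x).digest()

def build_tree(leaves):
    levels = [[H(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        if len(lvl) % 2: lvl = lvl + [lvl[-1]]        # pad odd levels
        levels.append([H(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def prove(levels, idx):
    proof = []
    for lvl in levels[:-1]:
        if len(lvl) % 2: lvl = lvl + [lvl[-1]]
        proof.append((lvl[idx ^ 1], idx % 2))         # (sibling, am-I-right?)
        idx //= 2
    return proof

def verify(root, leaf, proof):
    h = H(leaf)
    for sibling, is_right in proof:
        h = H(sibling + h) if is_right else H(h + sibling)
    return h == root

ids = [b"user%d" % i for i in range(5)]               # IDs in one epoch
levels = build_tree(ids)
root = levels[-1][0]                                  # published per epoch
assert verify(root, ids[3], prove(levels, 3))
```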
ClearBox is the first complete system which enables users to verify the storage savings exhibited by their data. We argue that ClearBox motivates a novel cloud pricing model which promises a fairer allocation of storage costs amongst users, without compromising data confidentiality or system performance. We believe that such a model provides strong incentives for users to store popular data in the cloud (since popular data will be cheaper to store) and discourages the upload of personal and unique content. As a by-product, the popularity of files additionally gives cloud users an indication of their level of privacy in the cloud; for example, a user can verify that his private files are not deduplicated, and thus have not been leaked.
Framework
WHAT IS THE TREDISEC FRAMEWORK?
The TREDISEC framework is a piece of software that enables cloud security technology providers to manage the entire lifecycle of the TREDISEC Primitives and Recipes.
The framework also supports consumers of such technologies in locating and identifying them in a simple and effective manner, as well as in testing and deploying them in a specific cloud-based environment, in order to fulfil the consumers' own requirements.
The TREDISEC framework has been released as Open Source Software, under the Apache 2.0 license.
The End
March 2018. It has been three years since we officially launched the TREDISEC Project.
In our backpack, more than 25 security primitives, 9 recipes and a Framework to support the development, integration, use and adoption of these solutions by different stakeholders of the Cloud Security market.
But these are just the visible face of the seven breakthrough innovations in Cloud Security developed in the project. The valuable knowledge shared, the expertise gained and the opportunities created provide an extraordinary feeling of a job well done.
Have a quick insight into the primitives by looking at the dedicated blog post series.
Get acquainted with our Framework through the presentations, videos and various infographics available at the dedicated web space. You can even get first-hand experience with a running demo instance of the framework: just follow the link provided and sign up. Additionally, because the TREDISEC framework is released as Open Source under the Apache 2.0 license, you can inspect and even download the code to explore it at your own pace.
We would be very happy to hear that you make use of it for your own purposes!
But we know you, and you do not simply content yourself with the surface. So please, help yourself and dive deep into the details of high-quality research work, described in deliverables and in multiple scientific publications accepted at top-ranked venues such as USENIX Security, IEEE Transactions on Cloud Computing, ACM CCS and ASIACCS.
But, wait a minute… Is this really the end?
If you are still not satisfied and would like to know about our plans for the future, or if you wish to explore a collaboration and continue the work that will take all these results to the next logical step, please contact us!
Catalogue of Recipes

Recipe | Primitives Included | Description |
---|---|---|
Verifiable Integrity of Virtual Systems | TRAVIS | This recipe includes a packaged version of the TRAVIS primitive, which provides the following functionalities: (i) continuous verification of the integrity of the outsourced business services/applications and the underlying infrastructure, (ii) monitoring and reporting about integrity aspects in Cloud Services Agreements. |
Access Control and Multi-tenancy | EPICA | EPICA (Efficient and Privacy-respectful Interoperable Cloud-based Authorization) is a software implementation that controls access to resources (either services or data) in multi-tenant cloud environments. This recipe leverages Docker to allow a fully automated deployment and testing of EPICA through the framework. |
Container Isolation | Container Isolation | This recipe secures Docker image manipulation throughout its life cycle: creation, storage and usage. |
Secure storage and deletion | | Traditional techniques like encryption and backups address availability and confidentiality concerns, but lack transparency on resource usage and assurance that data is made inaccessible when its owner so wishes. The Secure Storage and Deletion recipe enables such improved transparency and control for data owners. |
Secure verifiable storage | | This recipe offers cloud storage providers the advantage of ensuring secure and confidential storage of customers' data while satisfying the provider's scalability requirements and optimizing storage savings. |
Secure biometric matching | | A cloud service using this recipe will guarantee that the privacy of the data is preserved, as all operations occur in the encrypted domain, and will provide reliable cryptographic proofs for each biometric transaction. |
Secure storage with proofs of retrievability | | Secure Multi-Cloud Storage emerges as the centerpiece of tomorrow's scalable and secure storage technologies, combining the use of multiple cloud storage services and aggressive data deduplication techniques to further reduce storage cost with security and reliability at an unmatched level. |
Robust cloud platform | | This recipe consists of primitives designed to mitigate the risk of compromise significantly, leading to cloud platforms that are robust against cyber exploitation. |
Verifiable Computations | | This recipe provides means for cloud users to verify the correctness of operations executed (outsourced) at the cloud server's side. |
These Recipes are a joint effort of various TREDISEC partners. If you are interested in knowing more, please contact us!
Recipe - Verifiable Integrity of Virtual Systems
This recipe includes a packaged version of the TRAVIS primitive, which makes use of a well-known TPM function to make a claim about certain properties of a target system by supplying evidence to an appraiser over a network.
In our specific case, the client (appraiser) has outsourced the execution of a business service or application to a Cloud Service Provider (CSP) and needs to verify that the state of the cloud platform (target) where the business service/application is running remains as expected, at any point in time. Moreover, the client demands evidence of such unchanged state, i.e. evidence of the integrity of the remote virtual platform, and expects to be able to verify it without the CSP's interference.
The primitive provides the following functionalities:
- continuous verification of the integrity of the outsourced business services/applications and the underlying infrastructure,
- monitoring and reporting about Integrity aspects in Cloud Services Agreements.
The TRAVIS Recipe provides detailed instructions and scripts to support its correct deployment and configuration in a virtual environment. Additionally, the Recipe includes a testing infrastructure that leverages Vagrant, to automatically deploy and configure test VMs equipped with TRAVIS agents. This testing infrastructure permits configuration of several parameters, allowing for a complete performance evaluation.
Recipe - Access Control and Multi-tenancy
EPICA (Efficient and Privacy-respectful Interoperable Cloud-based Authorization) is a software implementation that controls access to resources (either services or data) in multi-tenant cloud environments.
EPICA supports an ABAC-based model that extends XACML policies to represent trust relationships between tenants (so-called "tenant-aware XACML policies") in order to govern cross-tenant access to shared cloud resources.
EPICA fulfils end-to-end security requirements while preserving critical functional requirements of cloud computing, such as scalability, availability and high performance. Moreover, the approach is applicable to Authentication and Authorization for Constrained Environments (ACE), such as IoT or 5G scenarios, where strong fine-grained mutual authentication and authorization schemes are critical to protect frequency and radio/communication resources, deliver 5G network services on demand and comply with different regulatory constraints.
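To illustrate what a tenant-aware access decision might look like, here is a hypothetical sketch in the spirit of the extended policies: a subject from another tenant is only granted access if the resource owner's tenant has declared a trust relationship with the subject's tenant. The attribute names and policy layout are invented for illustration; EPICA's real policies are XACML.

```python
# Hypothetical tenant-aware ABAC decision; not EPICA's actual schema.
from dataclasses import dataclass

@dataclass
class Policy:
    owner_tenant: str            # tenant that owns the resource
    action: str
    required_role: str

# Cross-tenant trust relationships declared by resource owners.
trust = {("tenantA", "tenantB")}   # tenantA trusts tenantB's subjects

def decide(policy, subject, action):
    if action != policy.action:
        return "Deny"
    same_tenant = subject["tenant"] == policy.owner_tenant
    trusted = (policy.owner_tenant, subject["tenant"]) in trust
    if (same_tenant or trusted) and subject["role"] == policy.required_role:
        return "Permit"
    return "Deny"

p = Policy(owner_tenant="tenantA", action="read", required_role="auditor")
print(decide(p, {"tenant": "tenantB", "role": "auditor"}, "read"))  # Permit
print(decide(p, {"tenant": "tenantC", "role": "auditor"}, "read"))  # Deny
```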
The EPICA Recipe leverages Docker to allow a fully automated deployment and testing through the framework.
Recipe - Container Isolation
This recipe secures Docker image manipulation throughout its life cycle: The creation, storage and usage of a Docker image is backed by a data-at-rest mechanism, which maintains sensitive data encrypted on disk and encrypts/decrypts them on-the-fly in order to preserve their confidentiality at all times, while the distribution and migration of images is enhanced with a mechanism that encrypts only specific layers of the file system that need to remain confidential and ensures that only legitimate key holders can decrypt them and reconstruct the original image.

Containers are constantly gaining ground in the virtualization landscape as a lightweight and efficient alternative to hypervisor-based Virtual Machines. Most of them and particularly Docker rely on union-capable file systems, where any action performed to a base image is captured as a new file system layer. This strategy allows developers to easily pack applications into Docker image layers and distribute them via public registries. Nevertheless, this image creation and distribution strategy does not protect sensitive data from malicious privileged users (e.g., registry administrator, cloud provider), because encryption is not natively supported.
Through a rich interaction with the recipe, users will experience first-hand how sensitive image data can be safely distributed and remain encrypted on the storage device throughout the container's lifetime, incurring only a marginal performance overhead.
Recipe - Secure storage with proofs of retrievability
NEC’s Secure Multi-Cloud Storage emerges as the centerpiece of tomorrow’s scalable and secure storage technologies.
Our solution combines the use of multiple cloud storage services and aggressive data deduplication techniques to further reduce storage cost with security and reliability at an unmatched level.

It uniquely supports strong guarantees on the confidentiality and availability of stored data at a cost comparable to simple cloud storage services.
We extend the storage range of current datacenter services without incurring high maintenance and storage costs, which is ideal for Enterprise and Government customers. Our solution ensures that the stored data is always available, is repaired in case of any partial data loss, and is protected from the strongest of adversaries (including state-level adversaries), without compromising system performance or usability.
Recipe - Robust Cloud Platform
This recipe consists of primitives designed to mitigate the risk of compromise significantly, leading to cloud platforms that are robust against cyber exploitation.

Since modern crypto schemes are highly effective, attackers use methods of cyber exploitation to compromise the involved software stack and obtain either cleartext data or sensitive key material directly at the source, essentially bypassing the strong security guarantees provided by using encryption.
This recipe consists of primitives designed to tackle the issue from three angles: vulnerability discovery, attack surface reduction and software hardening.
Complementing each other, they mitigate the risk of compromise significantly, leading to cloud platforms that are robust against cyber exploitation.
Recipe - Secure verifiable storage
This recipe offers cloud storage providers the advantage of ensuring secure and confidential storage of customers' data while satisfying the provider's scalability requirements and optimizing storage savings.
Thanks to this recipe, cloud storage providers will be able to offer their customers storage services with high confidentiality and security guarantees (no need to access customers' cleartext data) without incurring additional storage costs.
Recipe - Secure Storage and Deletion

Data storage has been studied for decades. Traditional techniques like encryption and backups address availability and confidentiality concerns, but lack transparency on resource usage and assurance that data is made inaccessible when its owner so wishes. The Secure Storage and Deletion recipe enables such improved transparency and control for data owners.
Recipe - Secure biometric matching
A cloud service using this recipe will guarantee that the privacy of the data is preserved, as all operations occur in the encrypted domain, and will provide reliable cryptographic proofs for each biometric transaction.

As digital transactions become a central part of economic activities, the use of biometric authentication to secure these processes is a growing trend.
The management of sensitive biometric data requires careful consideration of privacy aspects and the ability to handle a huge number of transactions while guaranteeing their integrity.
A cloud service using the TREDISEC recipes addresses these concerns by guaranteeing that the privacy of the data is preserved, as all operations occur in the encrypted domain, and by providing reliable cryptographic proofs for each biometric transaction.
Recipe - Verifiable Computations
This recipe provides some means to cloud users to verify the correctness of operations executed (outsourced) at the cloud server's side.
Thanks to this recipe, cloud service providers can offer improved transparency and integrity guarantees and hence increase their customers' trust in their services.
D1.7 Final Innovation management report
D3.3 - Complete Design and Evaluation of Verifiability mechanisms
This deliverable overviews the complete specification and evaluation of nine different TREDISEC primitives that enable a cloud customer to remotely verify the correctness of cloud operations, including storage and processing. Thanks to these primitives, cloud users gain more confidence to outsource their storage and processing operations to the cloud. On the other hand, thanks to their compatibility with cloud functional operations such as file deduplication or data replication, cloud servers can maintain the cost efficiency of current infrastructures while offering these new security guarantees. The proposed primitives are the following:
- Two verifiable storage primitives, namely ML-PoR and SPORT, that enable a cloud server to guarantee the correct storage of customers' data while being able to perform file deduplication to achieve storage savings. ML-PoR is a generic solution that extends traditional PoR (proof of retrievability) schemes to make them compatible with file-level deduplication. It leverages a key server so that all cloud users holding the same file generate the same PoR parameters to encode the to-be-outsourced data; this way, cloud servers are still able to perform deduplication. SPORT, on the other hand, is a new PoR that transparently supports multi-tenancy with deduplication by enabling different cloud users to share the same PoR tags in order to verify the integrity of the same file. SPORT introduces a stronger adversary model.
- One primitive, Mirror, that provides cloud customers with the guarantee that the cloud correctly keeps multiple replicas of their data, in addition to the retrievability guarantee for the original data and its replicas. Unlike previous schemes, Mirror outsources the replica generation function to the cloud server and makes use of cryptographic puzzles to prevent a malicious cloud from claiming to meet the replication guarantee while not actually storing these replicas.
- A proof of ownership primitive that allows a cloud server to verify that a user actually owns a file without the need for transferring it over the network. This immediately enables a secure client-side deduplication and thus achieves bandwidth savings for the storage of redundant data. The proposed primitive, OOPRF, makes use of an oblivious pseudo-random function in order for the user not to reveal any information about the file but still prove its ownership. An open-source implementation of two existing PoW solutions has been provided.
- Three verifiable computation primitives that help customers efficiently verify the correctness of some outsourced operations, namely: polynomial evaluation, matrix multiplication, and biometric matching. While the first two primitives make use of simple algebraic properties of the original operations (a classic example of such a check is sketched after this list), the verifiable biometric matching primitive optimizes an existing verifiable computation protocol to be compatible with the inner product operation.
- A verifiable document redacting primitive that empowers cloud users to easily remove some parts of an already signed document without affecting the validity of the signature. Thanks to this new primitive, users can avoid disclosing private parts of a document that do not need to be shared with the receiving party.
- A system integrity verification primitive, TRAVIS, which makes use of a Trusted Platform Module technology to achieve remote attestation of virtual cloud systems.
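As referenced above, a classic example of verifying an outsourced computation through a simple algebraic property is Freivalds' check for matrix multiplication: verifying a claimed product takes O(n^2) work per round instead of recomputing the O(n^3) multiplication. This is an illustration of the general idea, not the project's exact protocol.

```python
# Freivalds' probabilistic check that C equals A @ B.
import numpy as np

def freivalds(A, B, C, rounds=20):
    n = C.shape[1]
    for _ in range(rounds):
        r = np.random.randint(0, 2, size=(n, 1))     # random 0/1 vector
        if not np.array_equal(A @ (B @ r), C @ r):
            return False                              # caught a wrong C
    return True      # a wrong C slips through with prob <= 2**-rounds

A = np.random.randint(0, 10, (50, 50))
B = np.random.randint(0, 10, (50, 50))
assert freivalds(A, B, A @ B)
assert not freivalds(A, B, A @ B + np.eye(50, dtype=int))
```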
D4.2 - A Proposal for Resource Isolation in Multi-Tenant Storage Systems
D4.3 - A Proposal for Data Confidentiality and Deduplication
Cloud storage services have become an integral part of our daily lives. With more and more people operating multiple devices, cloud storage promises a convenient means for users to store, access, and seamlessly synchronize their data from multiple devices. With the ever-increasing amount of data produced worldwide, the cloud offers a cheaper and more reliable alternative to local storage. Existing cloud service providers such as Amazon S3, Microsoft Azure, or Dropbox guarantee a good trade-off between quality of service and cost effectiveness. Most existing cloud storage providers rely on data deduplication, storing duplicate data only once, in order to significantly reduce storage costs. The cloud has also gained many clients among SMEs and large businesses that are mainly interested in storing large amounts of data while minimizing the costs of both storage and infrastructure management/maintenance.
While the benefits of cloud storage are clear, there are many issues that have not been fully solved. The first problem is ensuring data confidentiality when data is outsourced to the cloud: even when cloud services rely on encryption mechanisms to guarantee data confidentiality, the necessary keying material can be acquired by means of backdoors, bribery, or coercion, leading to data compromise, and existing solutions are not performance-efficient and cause overhead, especially for large files. The second problem is securing data deduplication (over encrypted data). The third problem is the information leakage associated with data deduplication on a storage server: even if the underlying client-side encryption is secure, we can show that the storage provider can still acquire considerable information about the stored files without knowledge of the encryption key.
This deliverable presents our novel solutions to address the above problems. Summaries of the contributions follow; more details can be found in the corresponding sections of the deliverable and in our publications.
To provide confidentiality of data stored in the cloud, we study data confidentiality against an adversary which knows the encryption key and has access to a large fraction of the ciphertext blocks. To this end, we propose Bastion, a novel and efficient scheme that guarantees data confidentiality even if the encryption key is leaked and the adversary has access to almost all ciphertext blocks. We analyse the security of Bastion, and we evaluate its performance by means of a prototype implementation. We also discuss practical insights with respect to the integration of Bastion in commercial dispersed storage systems. Our evaluation results suggest that Bastion is well-suited for integration in existing systems since it incurs less than 5% overhead compared to existing semantically secure encryption modes.
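For intuition about the all-or-nothing flavor of protection that Bastion targets, the sketch below implements Rivest's classic package transform: an adversary missing even a single output block cannot recover the ephemeral key and hence learns nothing about any plaintext block. Bastion itself achieves this goal more efficiently, with a linear post-processing of a standard CTR-mode encryption; this is only an illustration of the property, not Bastion's construction.

```python
# Rivest-style all-or-nothing package transform (illustrative).
import os, hashlib

BS = 32  # block size in bytes

def prf(key, i):
    return hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def aont_package(blocks):
    k = os.urandom(BS)                                    # ephemeral key
    body = [bytes(a ^ b for a, b in zip(m, prf(k, i)))
            for i, m in enumerate(blocks)]
    digest = bytes(BS)
    for i, c in enumerate(body):                          # bind k to all blocks
        digest = bytes(a ^ b for a, b in zip(digest, prf(c, i)))
    last = bytes(a ^ b for a, b in zip(k, digest))
    return body + [last]

def aont_unpackage(blocks):
    *body, last = blocks
    digest = bytes(BS)
    for i, c in enumerate(body):
        digest = bytes(a ^ b for a, b in zip(digest, prf(c, i)))
    k = bytes(a ^ b for a, b in zip(last, digest))        # needs ALL blocks
    return [bytes(a ^ b for a, b in zip(c, prf(k, i)))
            for i, c in enumerate(body)]

msg = [os.urandom(BS) for _ in range(4)]
assert aont_unpackage(aont_package(msg)) == msg
```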
Regarding transparent data deduplication in the cloud, we propose two novel solutions: ClearBox and PerfectDedup. ClearBox enables cloud users to verify the effective storage space that their data is occupying in the cloud, and consequently to check whether they qualify for benefits such as price reductions. ClearBox is secure against malicious users and a rational storage provider, and ensures that files can only be accessed by their legitimate owners. We evaluate a prototype implementation of ClearBox using both Amazon S3 and Dropbox as back-end cloud storage. Our findings show that our solution works with the APIs provided by existing service providers without any modifications and achieves comparable performance to existing solutions. On the other hand, PerfectDedup enables the cloud to securely detect and deduplicate redundant data blocks while these are encrypted. PerfectDedup implements different encryption techniques based on the popularity of the data. Popular data are assumed to be less sensitive and shared among a large number of users, and are therefore protected under convergent encryption only, whereas unpopular data segments, which are likely to remain personal and unique, are encrypted with semantically secure symmetric encryption. We have implemented a prototype of this new mechanism and evaluated its performance. We show that, compared to existing solutions, PerfectDedup incurs less storage and communication overhead. Additionally, we also devise a new key generation protocol that enables cloud users to encrypt redundant data with the same encryption key. This new message-locked key generation protocol provides better security guarantees compared to existing protocols.
With respect to information leakage in deduplicated storage systems, we analyse the information leakage associated with data deduplication with respect to a curious storage server. We show that even if the data is encrypted using a key not known to the storage server, the latter can still acquire considerable information about the stored files and even determine which files are stored. We validate our results both analytically and experimentally using a number of real storage datasets.
D5.3 - Implementation of Provisioning, Outsourcing and Processing Frameworks
This report overviews the complete specification and the evaluation of four different TREDISEC primitives that enable a cloud server to process data while not being able to access its content. Thanks to these primitives, cloud customers can delegate their computationally expensive operations to the untrusted cloud server and hence benefit from the performance advantages offered by this new technology while preserving data privacy. These four primitives are the following:
Configuration tool for privacy preserving Data provisioning and Outsourcing
We propose a configuration management tool to help security administrators securely outsource their encrypted relational SQL database into the untrusted cloud server. During the migration of the data, this tool analyses all utility and security constraints, detects conflicts between them, if any, and offers some conflict resolution procedures. This new tool improves the performance and effectiveness of the entire data provisioning and outsourcing framework described in previous deliverables D5.1 and D5.2.
Multi-User Searchable Encryption
This primitive addresses the problem of searchable encryption in the multi-user context whereby multiple users outsource their encrypted files and the corresponding indices into the cloud and allow multiple other users to query them while not revealing any sensitive information to the cloud server. This primitive was previously introduced in D5.2. In this deliverable we provide its complete specification and evaluate its security and performance. We also identify a serious privacy leakage that almost all existing multi-user searchable encryption solutions suffer from and define a dedicated security model that needs to be taken into account during the design of a new multi-user searchable encryption scheme.
Authenticated Encryption
Authenticated encryption (AE) is a symmetric-key encryption mechanism that, in addition to confidentiality, also delivers integrity and authenticity. It has been shown in the literature that AE solutions can be considered essential building blocks for verifiable searchable encryption solutions such as the ones presented in the previous deliverable (D5.2). In this deliverable, we propose to improve the effectiveness of existing AE schemes by supporting variable-length tags per key for ciphertext expansion without sacrificing functional and security properties. To this end, we provide a formal definition for the notion of nonce-based variable-stretch AE (nvAE) and further propose extensions to existing solutions to securely achieve this property.
Privacy preserving feature extraction for Biometrics
This new primitive enables a cloud customer to outsource a classification algorithm in the context of deep neural networks. We assume that the cloud customer has already obtained a trained neural network model and would like to outsource the classification operation to the cloud while keeping the underlying sensitive information private. Indeed, the cloud should discover neither the input to the classification algorithm nor its output (the resulting classification label). To achieve the privacy requirement, the proposed solution uses a fully homomorphic encryption (FHE) scheme while keeping the number of multiplication operations low. The underlying algorithmic building block (namely the ReLU activation function) is therefore approximated by a low-degree polynomial, which makes the neural network compatible with FHE without losing much of its accuracy.
D6.1 - TREDISEC framework implementation
The TREDISEC project has two main objectives:
- Design and develop solutions that fulfil both security and functional requirements of cloud-based systems
- Develop a framework that supports users in designing, managing and using such solutions.
The solutions are offered through the framework in the form of security primitive patterns, security primitive implementations and TREDISEC Recipes. Deliverables D2.3 and D2.4 described the architecture design of the TREDISEC framework, and detailed the lifecycle of security primitives, in their three flavours, and how the framework supports that, providing different functionalities and specific features to the four user roles identified, namely: TREDISEC Security Admin, TREDISEC end-user, Security Expert engineer, Security Technology Provider.
Deliverable D6.1 is a software implementation of the TREDISEC framework architecture design, as it is described in its final version in D2.4. The present document is a description of the actual software, which is available, as a stable version at M30, from the following sources:
- GitLab repository of source code: hosted by Atos. The framework software source code can be obtained from URL: https://gitlab.atosresearch.eu/ari/tredisec_wp6/tags/tredisec_framework_.... Access must be requested in advance, by contacting TREDISEC project coordinator: beatriz.gallego-nicasio@atos.net
- TREDISEC Framework instance running at the Test environment: hosted by GRNET. The framework software is deployed, up and running, publicly available at the following URL: https://tredisec.dev.grnet.gr/. Credentials to access and use the framework can be requested by contacting TREDISEC project coordinator: beatriz.gallego-nicasio@atos.net
In the following, we describe in detail the TREDISEC Framework software implementation, starting with the roles we support and how they should use the framework, and the functionalities offered, with special emphasis on three processes: packaging (critical for the creation of actionable security primitive patterns, security primitive implementations and TREDISEC Recipes), primitive testing, and deployment.
The testing of security primitive implementations and TREDISEC Recipes, which can focus either on functional requirements (e.g., correct functionality of the solution) or on performance (e.g., increase or reduction of processing time after deploying the solution), is supported by the framework through the so-called TREDISEC Testing Environments (TTEs). These TTEs are basically virtual environments (VMs) that users of the framework can use to test the capabilities of the primitives before actually downloading/using them in their own cloud environments. The TTEs can also be used to deploy TREDISEC Recipes and play around with them, e.g. by connecting via ssh.
Additionally, we provide technical information on the software implementation building blocks, technologies used, communication channels and interfaces exposed, and procedures to build, install and configure your own instance of the TREDISEC Framework.
Finally, we include the conclusions and the future work we will perform in the last stage of the project, together with the initial status and expected functionality.
D6.3 - Use cases development and Deployment
In the context of the TREDISEC project there are six distinct use cases. In addition, the project partners have developed a number of security primitives. In this deliverable, we describe how the use-case partners have integrated different primitives into their cloud infrastructures in order to fulfil the requirements identified at the beginning of the project. Specifically, in use-case 1 ("storage efficiency with security"), GRNET has integrated two primitives: one that provides users the means to prove that they really own a specified file on the cloud and another that provides secure file deduplication. In use-case 2 ("multi-tenancy and access control"), GRNET again employed a primitive to isolate and secure resources from malicious users. ARSYS integrated three primitives in use-case 3 ("optimised WebDav service for confidential storage"), involving secure file deduplication, secure deletion and multi-tenant access control. In use-case 4, MPH integrated two primitives related to verifiable matching of biometric templates and TPM-based remote attestation. MPH also employed biometric feature extraction over encrypted domains in the context of use-case 5. Finally, SAP integrated in its infrastructure a primitive that provides secure data migration.
Furthermore, we describe the specifics of each integration and the threats that the primitives address in the context of each use case, and finally discuss the numerous challenges, interesting observations and lessons learned. Notably, in most cases more than one primitive was integrated. There were also cases where the testing environments produced were connected with the TREDISEC Framework (meaning they are available as testing environments through it).
D6.4 - Final Evaluation report
This deliverable evaluates different TREDISEC primitives in the context of six Use Cases and the framework. Each of the Use Cases and the framework represents a scenario in which we describe how different primitives have been evaluated within their test environments, testing new functionalities and security solutions. The overall goal is to check and validate whether the requirements identified at the beginning of the project are met. Following D6.3 [1], we describe here how each of the scenarios (the 6 Use Cases and the framework) has been tested, which methodologies have been used (as described in D6.2 [2]), how all the work related to the evaluation process was planned and executed (Use Case processes, test cases, results, etc.), and finally whether the results obtained are in line with the expected results and fulfil all requirements.
Specifically, Use Case 1 evaluates two different primitives (PoW and PerfectDedup) integrated into GRNET's cloud infrastructure; both address specific threats that exist in the infrastructure without interfering with each other's functionality or affecting the storage service in any way. GRNET also tests, in Use Case 2, the integration of the Container Isolation primitive; in all cases the success requirements are met and explained, preventing different attacks in order to secure the resources of the users in the cloud.
Use Case 3 consists of a set of functionalities and security solutions including multi-tenancy access control, (file-based) secure deduplication and secure deletion primitives, and was also tested with a cloud storage product. Together with the primitive owners, Arsys integrated the primitives and tested them against the UC3 requirements with dedicated user accounts. Each of the primitives showed the benefits of these solutions and fulfils the Use Case requirements. A good opportunity was identified to further develop these primitives so that they can work together, sharing resources and interoperating.
Use Case 4 consists of an authentication protocol where a user authenticates to a service provider by means of a biometric comparison delegated by the service provider to a third party. Two primitives were integrated: a verifiable matching of biometric templates and a TPM-based remote attestation. Benchmarks were performed to validate some of the criteria, and the functional and security requirements of the Use Case were evaluated. In conclusion, all the evaluation criteria and all the mandatory requirements were validated. Use Case 5, a second Use Case from IDEMIA, involves a biometric database that, for privacy and legal reasons, should be encrypted; the challenge it raises is the ability to apply updates over encrypted data. Due to the primitive's lack of efficiency, a replacement Use Case was fully implemented instead, namely the classification of digit images, and we discuss how the various experiments done with this replacement Use Case can be extrapolated to assess the maturity of the original Use Case and its potential future usability.
Use Case 6 concerns the migration of legacy data of an enterprise resource planning (ERP) application into an encryption-enabled database hosted at a cloud service provider. A "TPC-H ERP application" demonstrating a typical ERP scenario was developed for the activities of a wholesale supplier, who manages, sells and distributes products worldwide. To achieve the Use Case goal, the Secure Data Migration Service primitive is used for a convenient and performant migration. Using interviews with experts and quantitative reports, all evaluation criteria were assessed and, besides all mandatory requirements, two optional requirements were fulfilled.
Finally, the framework is the main frontend that TREDISEC stakeholders will use to interact with security primitives and recipes, and thus the validation focused on assessing how stable and easy to use the platform is. GRNET and ATOS built five complementary teams that were instructed in the use of the framework features and in its technical implementation details in order to properly answer questionnaires and interviews. The feedback received was analysed by ATOS and GRNET, with the conclusion that all evaluated criteria are rated above average (over 3 on a 1-to-5 scale). Six evaluation criteria (related to business requirements) were not validated due to time constraints; we decided to discard these 6 criteria in favour of assessing requirements aimed at achieving a higher-quality, adaptable, scalable, interoperable and usable framework.
D7.8 - Exploitation report and long term sustainability strategy
This document reports on all plans and activities for the exploitation of TREDISEC results by the consortium partners. It focuses on marketing strategies and business opportunities for the deployment of security solutions developed by TREDISEC.
The main purpose of this document is to inspect the technology put forth by TREDISEC from the point of view of exploitation, particularly in terms of business value. As the report shows, the results of the project offer cutting-edge security solutions which have the potential to greatly improve the European cloud businesses and to generate new ones.
The document summarizes the business plan undertaken by the TREDISEC consortium as well as the research and development activities pursued by the partners towards realizing this plan. It includes the Exploitation Strategy agreed upon by the partners, which illustrates the exploitation principles and methodology, IPR management, and key exploitable results; a detailed analysis of the current cloud market and how the key exploitable results of TREDISEC could affect it; the business model developed within the consortium and the corresponding exploitation strategy pursued by all the partners jointly; the specific exploitation strategies adopted by the partners individually; and a sustainability strategy to ensure long-term impact of the TREDISEC results.
We conclude that the results achieved by TREDISEC promise to contribute positively to the European cloud market, and are capable of boosting the large-scale adoption of cloud products and outsourced services, possibly generating new business in the cloud services landscape.
D7.6 - Final Dissemination and Communication activities reporting
This deliverable, D7.6 Final Dissemination and Communication Activities Reporting, aims to provide an overview of the communication and dissemination activities carried out by TREDISEC during the third project year, from April 1st 2017 to March 31st 2018 (M25 to M36).
Besides, as it is the final communication and dissemination report of the project, it includes a review of the work performed over the project's three-year duration, and a list of KPIs to assess whether this work is aligned with the initial objectives outlined in the D7.3 Communication Plan.
The reporting distinguishes between communication and dissemination activities due to the different nature of their objectives: while communication focuses on promoting the project itself, dissemination aims to make the project results known within the scientific community.
The activities reported in communication have been classified in the following groups:
- Graphic identity and Branding: use of the graphic elements which comprise the visual identity of the project.
- Web Platform: project website and social network metrics (LinkedIn and Twitter).
- Press and campaigns: press releases launched by the project, articles for specialized publications and mentions in other media.
- Events: conferences, workshops, or meetings where the project has been promoted.
- Collaborations with other R&D projects/platforms and forums: specific collaborations to share knowledge and look for synergies with other projects and platforms.
The activities reported in dissemination are mainly:
- Publications in refereed conferences, workshops and journals.
- Keynote speeches and workshop organization
- Whitepapers
The most remarkable KPIs after three years are the following: