Projects

ANTAREX


The main goal of the ANTAREX project (Sept 2015 - Aug 2018) is to provide a breakthrough approach to map, manage at runtime, and autotune applications for green and heterogeneous High Performance Computing systems up to the Exascale level.
One key innovation of the proposed approach is a separation of concerns (self-adaptivity and energy-efficient strategies are specified aside from the application functionality), promoted by the definition of a Domain Specific Language (DSL) inspired by aspect-oriented programming concepts for heterogeneous systems. The new DSL will be used to express the adaptivity/energy/performance strategies and to enforce application autotuning and resource and power management at runtime.
The goal is to support the parallelism, scalability and adaptability of a dynamic workload by exploiting the full system capabilities (including energy management) for emerging large-scale and extreme-scale systems, while reducing the Total Cost of Ownership (TCO) for companies and public organizations.
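Purely as an illustration of the separation-of-concerns idea described above (the actual ANTAREX DSL is aspect-oriented and is covered in the publications below), the following hypothetical C++ sketch keeps the functional kernel free of adaptivity logic and places the tuning strategy in a separate module that a runtime autotuner could drive; all names and thresholds are invented for the example.

```cpp
// Illustrative sketch only: the separation-of-concerns idea behind the
// ANTAREX approach, expressed in plain C++. Names, knobs and thresholds are
// hypothetical; the real project uses an aspect-oriented DSL.
#include <cstddef>
#include <cstdio>
#include <vector>

// --- Application code: purely functional, no adaptivity logic inside. ---
struct Knobs {
    std::size_t stride = 1;   // approximation knob: sample every stride-th element
};

double kernel(const std::vector<double>& data, const Knobs& k) {
    // Approximate reduction: sampling and rescaling trades accuracy
    // for execution time and energy.
    double acc = 0.0;
    for (std::size_t i = 0; i < data.size(); i += k.stride)
        acc += data[i];
    return acc * static_cast<double>(k.stride);
}

// --- Adaptivity strategy: specified aside from the functionality. ---
// A runtime manager could call this with monitored power figures.
Knobs select_knobs(double measured_power_w, double power_budget_w) {
    Knobs k;
    if (measured_power_w > power_budget_w)
        k.stride = 4;   // over budget: accept a coarser approximation
    return k;
}

int main() {
    std::vector<double> data(1u << 20, 1.0);
    Knobs k = select_knobs(/*measured_power_w=*/18.0, /*power_budget_w=*/15.0);
    std::printf("result = %.1f (stride = %zu)\n", kernel(data, k), k.stride);
}
```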

View ANTAREX official website.
Professors:
  1. Prof Giovanni Agosta
  2. Prof Gianluca Palermo
PhD students:
  1. Stefano Cherubin
  2. Davide Gadioli
  3. Emanuele Vitali
Publications:
  1. C Silvano et al. The ANTAREX approach to autotuning and adaptivity for energy efficient HPC systems. Proceedings of the ACM International Conference on Computing Frontiers, pp. 288-293, 2015
  2. C Silvano et al. AutoTuning and Adaptivity appRoach for Energy efficient eXascale HPC systems: the ANTAREX Approach. Design, Automation, and Test in Europe
  3. C Silvano et al. ANTAREX - AutoTuning and Adaptivity appRoach for Energy Efficient eXascale HPC Systems. IEEE 18th International Conference on Computational Science and Engineering (CSE), 2015


MANGO

The essential objective of MANGO (Oct 2015 - Sept 2018) is to achieve extreme resource efficiency in future QoS-sensitive HPC through ambitious cross-boundary architecture exploration. The research will investigate the architectural implications of the emerging requirements of HPC applications, aiming at the definition of new-generation high-performance, power-efficient, deeply heterogeneous architectures with native mechanisms for isolation and quality-of-service. To achieve such ambitious objectives, MANGO will avoid conservative paths. Instead, its disruptive approach will challenge several basic assumptions, exploring new many-core architectures specifically targeted at HPC. The project will involve many different and deeply interrelated mechanisms at various architectural levels:
  1. Heterogeneous computing cores
  2. Memory architecture
  3. Interconnect
  4. Runtime resource management
  5. Power monitoring and cooling
  6. Programming models


In particular, to gain a system-wide understanding of the deep interplay of mechanisms along the PPP axes, MANGO will explore holistic proactive thermal and power management aimed at energy optimization, creating a hitherto nonexistent link between hardware and software effects and involving modeling at all layers of HPC server, rack, and datacenter design. Ultimately, the combined interplay of the multi-level innovative solutions brought by MANGO will result in a new positioning in the PPP space, ensuring sustainable performance as high as 100 PFLOPS for realistic levels of power consumption (<15 MW) delivered to QoS-sensitive applications in large-scale capacity computing scenarios. Particularly relevant for current European HPC strategies, the results achieved by the project will provide essential building blocks at the architectural level, enabling the full realization of the long-term objectives foreseen by the ETP4HPC strategic research agenda.

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 671668.
Visit the MANGO official website.


Professors:
  1. Prof William Fornaciari
  2. Prof Giovanni Agosta
Research Fellows:
  1. Dr Giuseppe Massari
  2. Dr Davide Zoni
PhD students:
  1. Federico Reghenzani
  2. Simone Libutti
  3. Anna Pupykina
  4. Michele Zanella
Publications:
  1. J Flich et al. The MANGO FET-HPC Project: An Overview. 2015 IEEE 18th International Conference on Computational Science and Engineering
  2. J Flich et al. Enabling HPC for QoS-sensitive applications: The MANGO approach. 2016 Design, Automation & Test in Europe Conference & Exhibition (DATE), pp. 702-70
  3. A Pupykina, G Agosta. Optimizing Memory Management in Deeply Heterogeneous HPC Accelerators. 46th International Conference on Parallel Processing Workshops (ICPPW), pp. 291-300, 2017
  4. J Flich et al. MANGO: Exploring Manycore Architectures for Next-GeneratiOn HPC Systems. 2017 Euromicro Conference on Digital System Design, Special Session on European Project in Digital System Design, pp. 478-485, Vienna 2017.
With the introduction of multicore CPUs, systems began to exploit parallelism to achieve unprecedented levels of performance and concurrency. Moreover, the silicon miniaturization process has led to the proliferation of energy-efficient embedded devices in many contexts. The advent of the Internet-of-Things (IoT) era has led to scenarios in which such devices are spread over the environment, collecting data from sensors or from other devices, or acting to modify their surroundings. On the other hand, Cloud Computing has opened the door to processing the collected data on remote high-performance infrastructure, in order to perform more complex data analysis.
In this context, the amount of data produced at the “edge of the network” can be huge, which means that data processing may require a lot of network bandwidth and computing resources from the Cloud, with a direct impact on performance and costs. To mitigate this side effect, we may perform partial data processing on edge devices, reducing the amount of Cloud processing and data transfer. However, due to the resource-constrained nature of low-power devices, this is not always possible. The need for a halfway solution, consisting of more powerful nearby devices, has therefore led to the introduction of the “Fog Computing” paradigm. Thanks to its physical proximity to edge devices and its higher performance capabilities, this intermediate layer has to provide better response times, especially for emerging real-time application scenarios. This new approach introduces new challenges regarding security, fault tolerance, and task and data allocation strategies.
In this sense, the idle resources of mobile devices and smart objects sharing the same network can become part of the Fog. However, this requires that both applications and systems are properly managed. In particular, applications need to be modular, so that they can be split into tasks whose computation can be performed on multiple devices. On the device side, a resource management strategy is required to allocate the available resources to incoming and running tasks, according to different strategies based on specific optimization metrics (performance, timing constraints, power consumption, etc.) and on the status of the device (e.g., availability, energy budget, capabilities).
By exploiting the MANGO project outcomes, we developed a set of frameworks aimed at managing the “Computing Continuum”, starting from the Edge/Fog layers. This includes a programming model library (libMANGO), which allows the development of multi-tasking applications, consisting of multiple kernels, that can be transparently deployed on an HPC infrastructure as well as on a distributed system of high-end embedded devices. The deployment (or offloading) is transparently managed by the BarbequeRTRM, according to specific resource management policies and application requirements, relying on the BeeR framework. BeeR is the abstraction layer towards the distributed nature of the system, i.e., the component in charge of actually performing data transfers and task offloading.
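The actual programming interface is available in the MANGOLIBS repository linked below; the following C++ sketch is only a hypothetical illustration of the multi-kernel application idea, where the application declares its kernels and the decision of where each kernel runs is left to the resource manager (the role played by the BarbequeRTRM and BeeR in the real system). None of these names belong to the real libMANGO API.

```cpp
// Hypothetical sketch of a "multi-kernel application" in the spirit of the
// libMANGO programming model: the application declares kernels and their
// bodies, while where each kernel runs (HPC node, embedded device, ...) is
// decided by the run-time resource manager. Not the real libMANGO API.
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

struct Kernel {
    std::string name;
    std::function<void()> body;   // the computation itself
};

struct TaskGraph {
    std::vector<Kernel> kernels;
    void add(std::string name, std::function<void()> body) {
        kernels.push_back({std::move(name), std::move(body)});
    }
};

// Stand-in for the resource manager: in MANGO this decision is taken by the
// BarbequeRTRM according to policies and application requirements, while the
// BeeR layer performs the actual offloading and data transfers.
void deploy_and_run(const TaskGraph& app) {
    for (const auto& k : app.kernels) {
        std::printf("[manager] scheduling kernel '%s'\n", k.name.c_str());
        k.body();   // here: run locally; in the real system, possibly offloaded
    }
}

int main() {
    TaskGraph app;
    app.add("preprocess", [] { std::puts("preprocessing on an edge device"); });
    app.add("analyze",    [] { std::puts("analysis on a fog/HPC node"); });
    deploy_and_run(app);
}
```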
  1. MANGOLIBS Git repository
  2. HiPEAC 2020 Tutorial: A Run-time Managed Programming Approach to Computing Continuum (Slides)

M2DC


M2DC (Jan 2016 - Dec 2018) targets the development of a new class of energy-efficient TCO-optimized appliances with built-in efficiency and dependability enhancements. The appliances will be easy to integrate with a broad ecosystem of management software and fully software-defined to enable optimization for a variety of future demanding applications in a cost-effective way.
The M2DC server platform will enable customization and smooth adaptation to various types of applications, while advanced management strategies and system efficiency enhancements (SEE) will be used to achieve high levels of energy efficiency, performance, security and reliability. The M2DC middleware will provide a data-centre-capable abstraction of the underlying heterogeneity of the M2DC server. On top of that, it allows the deployment of various optimized appliances including, e.g., photo finishing systems, IoT data processing, cloud computing and HPC.


This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 688201.
View M2DC official website.


  1. A Oleksiak et al. M2DC–Modular Microserver DataCentre with Heterogeneous Hardware. Microprocessors and Microsystems, 2017
  2. M Cecowski et al. The M2DC Project: Modular Microserver DataCentre. Euromicro Conference on Digital System Design (DSD), pp. 68-74, 2016
  3. MKA Oleksiak et al. Data centres for IoT applications: The M2DC approach. Embedded Computer Systems: Architectures, Modeling and Simulation (SAMOS), 2016

RECIPE


RECIPE (REliable power and time-ConstraInts-aware Predictive management of heterogeneous Exascale systems) is a new FETHPC-02-2017 (Transition to Exascale Computing) project led by Politecnico di Milano - Dipartimento di Elettronica, Informazione e Bioingegneria, with a total funding of 3,285,300 €. The Project Coordinator is Prof. William Fornaciari, the Project Technical Manager is Prof. Giovanni Agosta, and the duration is 36 months: May 2018 - April 2021. RECIPE, together with the other running projects of the HEAP Lab (M2DC, MANGO), contributes to positioning POLIMI within one of the biggest clusters of projects on HPC/Exascale computing in Europe. RECIPE provides a hierarchical runtime resource management infrastructure that optimizes energy efficiency and minimizes the occurrence of thermal hotspots, while enforcing the time constraints imposed by the applications and ensuring reliability for both time-critical and throughput-oriented computation.


A number of goals have been identified; the most relevant are: a 25% increase in energy efficiency (performance/watt), with a 15% improvement in MTTF thanks to proactive thermal management; an energy-delay product improved by up to 25%; a 20% reduction in faulty executions, with recovery times compatible with real-time performance; and full exploitation of the available resources under non-saturated conditions. The consortium aggregates some of the most important players in Europe: POLIMI provides expertise on resource management and programming models, as well as scientific coordination; EPFL is the leading provider of thermal models for HPC; UPV is one of the key innovators in optimized interconnection networks; and CeRICT provides expertise on accelerators. They are joined by two supercomputing centres: BSC, one of the leading HPC providers in Europe with the MareNostrum, ranked 13th in the Top 500 in June 2017, and PSNC, another Top 500 HPC centre in Poland; a research hospital from Switzerland, CHUV; and an SME active in product design and development, IBTS, which provide effective exploitation avenues through industry-based use cases.
View the RECIPE official website.


Professors:
  1. Prof William Fornaciari
  2. Prof Giovanni Agosta
Research Fellows:
  1. Dr Giuseppe Massari
  2. Dr Davide Zoni
PhD students:
  1. Luca Cremona
  2. Federico Reghenzani
  3. Anna Pupykina
  4. Michele Zanella

SafeCOP


SafeCOP (April 2016 - March 2019) is a project that targets the so-called Cooperating Cyber-Physical Systems (CO-CPS), that is, systems that rely on wireless communication, have multiple stakeholders, use dynamic system definitions (openness) and operate in unpredictable environments. In these scenarios no single stakeholder can be identified as responsible: safe cooperation relies on wireless communication, and security becomes a major concern. SafeCOP will provide an approach to the safety assurance of CO-CPS, thus enabling their certification and development. In particular, the project will define a runtime manager that detects abnormal behaviour at runtime and triggers, if needed, a safe degraded mode. SafeCOP will also develop methods and tools to certify cooperative functions, and will propose new standards and regulations to certification authorities and standardization committees.
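As a purely illustrative sketch (not a SafeCOP deliverable), the runtime-manager idea described above could be reduced to a monitor that watches the quality of the cooperative wireless link and switches to a safe degraded mode when behaviour looks abnormal; the thresholds and names below are invented for the example.

```cpp
// Illustrative sketch of the runtime-manager idea: monitor the quality of the
// wireless link and switch to a safe degraded mode when the observed
// behaviour is abnormal. Thresholds and names are hypothetical.
#include <cstdio>

enum class Mode { Cooperative, Degraded };

struct LinkStats {
    double packet_loss;   // fraction of packets lost in the last window
    double latency_ms;    // observed end-to-end latency
};

Mode evaluate(const LinkStats& s, Mode current) {
    const bool abnormal = s.packet_loss > 0.05 || s.latency_ms > 100.0;
    if (abnormal && current == Mode::Cooperative) {
        std::puts("[runtime manager] abnormal link: entering safe degraded mode");
        return Mode::Degraded;   // e.g. reduce speed, rely on on-board sensing only
    }
    return current;
}

int main() {
    Mode mode = Mode::Cooperative;
    mode = evaluate({/*packet_loss=*/0.12, /*latency_ms=*/40.0}, mode);
    std::printf("mode = %s\n", mode == Mode::Degraded ? "degraded" : "cooperative");
}
```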


The advantages include:
  1. Lower certification costs
  2. Increased trustworthiness of wireless communication
  3. Better management of increasing complexity
  4. Reduced effort for verification and validation
  5. Lower total system costs
  6. Shorter time to market
  7. Increased market share
View SafeCOP official website.


  1. A Agneessens et al. Safe cooperative CPS: A V2I traffic management scenario in the SafeCOP project. Embedded Computer Systems: Architectures, Modeling and Simulation (SAMOS)
  2. G Agosta et al. V2I Cooperation for Traffic Management with SafeCop. Digital System Design (DSD), 2016 Euromicro Conference on, 621-627

Past projects


2PARMA


The current trend in computing architectures is to replace complex superscalar architectures with small homogeneous processing units connected by an on-chip network. This trend is mostly dictated by inherent silicon technology limits, which draw closer as process densities increase. The number of cores integrated in a single chip is expected to grow rapidly in the coming years, moving from multi-core to many-core architectures. This trend will require a global rethinking of software and hardware design approaches.
This class of computing systems (Many-core Computing Fabrics) promises to increase performance, scalability and flexibility, provided that appropriate design and programming methodologies are defined to exploit the high degree of parallelism exposed by the architecture. Other potential benefits of Many-core Computing Fabrics include energy efficiency, improved silicon yield, and the ability to account for local process variations. To exploit these potential benefits, effective run-time power and resource management techniques are needed. With respect to conventional computing architectures, Many-core Computing Fabrics offer customisation capabilities to extend and/or configure the architectural template at run time to address a variable workload.


The 2PARMA project aims at overcoming the lack of parallel programming models and run-time resource management techniques for exploiting the features of many-core processor architectures. To this end, a Consortium has been set up to gather the required expertise in the areas of system/application software and computing architectures.
The 2PARMA project (Jan 2010 - Dec 2012) focuses on the definition of a parallel programming model combining component-based and single-instruction multiple-thread approaches, instruction set virtualisation based on portable bytecode, run-time resource management policies and mechanisms as well as design space exploration methodologies for Many-core Computing Fabrics.

Visit 2PARMA official website.


  1. C Silvano et al. 2PARMA: parallel paradigms and run-time management techniques for many-core architectures. IEEE Computer Society Annual Symposium on VLSI (ISVLSI), pp. 494-499, 2010
  2. C Silvano et al. Parallel programming and run-time resource management framework for many-core platforms: The 2parma approach. 6th Reconfigurable Communication-centric Systems-on-Chip (ReCoSoC), 2011.
  3. G Ascheid et al. Parallel paradigms and run-time management techniques for many-core architectures. Interconnection Network Architecture: On-Chip, Multi-Chip Workshop, 2012
  4. C Silvano et al. Parallel paradigms and run-time management techniques for many-core architectures: The 2PARMA approach. 9th IEEE International Conference on Industrial Informatics (INDIN), 2011

COMPLEX


The rising heterogeneity and complexity of embedded systems creates gaps and defines challenges that leading industry has to face, such as handling the complexity of execution platforms and applications, coping with the uncertainty of platform selection and application-to-platform mapping, balancing increasing power consumption against performance and explicit application needs, and meeting memory demands both in size and in access times.
As a consequence, the primary objective of COMPLEX (Dec 2009 - Nov 2012) is to develop an innovative, highly efficient and productive design methodology and a holistic framework for iteratively exploring the design space of embedded hardware/software (HW/SW) systems.


The R&D activities performed in COMPLEX will target new modeling and specification methodologies, using an MDA-based design entry for system design, the integration of HW and SW timing and power estimation into efficient virtual system simulation, and multi-objective design-space exploration that takes run-time management for power and performance optimization into account.
A distinguishing feature of the R&D approach of COMPLEX is that it unifies the development and integration of a next-generation MDA design entry with platform-based design, existing EDA techniques and tools for estimation and model generation of virtual system prototypes, and a multi-objective design-space exploration technique and tool. This enables a synergistic, holistic approach to embedded HW/SW virtual system prototyping, regardless of the target platform and application domain.

The COMPLEX design framework will be developed by research (OFFIS, Politecnico di Milano, Politecnico di Torino, University of Cantabria, IMEC), industry (STMicroelectronics, GMV and Thales) and EDA partners (ChipVision, Synopsys-Coware, Magilem, EDALab), ensuring its usability in realistic, industry-strength design flows and environments, thus allowing the industrial partners to take advantage of the new solutions during the course of the project and to apply the new tools for production purposes shortly after project end.
The COMPLEX technical objectives constitute a prerequisite for the commercial targets of the industrial partners, which are geared towards improving their (and their customers') competitiveness in the worldwide market of electronic products and applications.

Visit COMPLEX official website.


  1. C Brandolese and W Fornaciari. Software Energy Optimization Through Fine-Grained Function-Level Voltage and Frequency Scaling. In International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS'2012). Tampere, Finland, October 2012.
  2. K Grüttner et al. COMPLEX - COdesign and power Management in PLatform-based design space EXploration. In Proc. of the 15th Euromicro Conference on Digital System Design (DSD'2012). Cesme-Izmir, Turkey, September 2012.
  3. P Bellasi, W Betz, L M Marchi and W Fornaciari. A Step Toward Exploiting Task Affinity in Multi-core Architectures to Improve Determinism of Real-time Applications. In International Conference on Real-Time and Embedded Systems (RTES'2010). Singapore, November 2010.
  4. C Brandolese and L Rucco. A Genetic Approach for WSN Lifetime Maximization through Dynamic Linking and Management. In Proc. of the 7th ACM workshop on Performance Evaluation of Wireless Ad Hoc, Sensor, and Ubiquitous Networks (PE-WASUN'10). Bodrum, Turkey, pp. 99-100, 2010.
  5. P Bellasi et al. Queueing Network Models for Performance Evaluation of ZigBee-Based WSNs. In Alessandro Aldini, Marco Bernardo, Luciano Bononi and Vittorio Cortellessa (eds.). Computer Performance Engineering. Series LNCS, volume 6342, Springer Berlin/Heidelberg, pp. 147-159, 2010.
  6. P Bellasi et al. Constrained Power Management: Application to a Multimedia Mobile Platform. In Proceedings of the Conference on Design, Automation and Test in Europe (DATE'10). Dresden, Germany, pp. 989–992, 2010.
  7. C Brandolese, W Fornaciari and L Rucco. A Lightweight Mechanism for Dynamic Linking in Wireless Sensor Networks. In IEEE Latin America Symposium on Circuits and Systems (LASCAS'10). Foz do Iguazu, Paranà, Brazil, February 2010.
  8. C Brandolese, W Fornaciari and D P Scarpazza. Source-Level Energy Estimation and Optimization of Embedded Software. In IEEE Latin America Symposium on Circuits and Systems (LASCAS'10). Foz do Iguazu, Paranà, Brazil, February 2010.
  9. W Fornaciari and P Bellasi. Cross-Layer Constrained Power Management: Application to a Multimedia Mobile Platform. In IEEE Latin America Symposium on Circuits and Systems (LASCAS'10). Foz do Iguazu, Paranà, Brazil, February 2010.
  10. P Bellasi, W Fornaciari and D Siorpaes. A Hierarchical Distributed Control for Power and Performances Optimization of Embedded Systems. In Christian Müller-Schloer, Wolfgang Karl and Sami Yehia (eds.). Architecture of Computing Systems (ARCS'2010). Series LNCS, volume 5974, Springer Berlin/Heidelberg, pp. 37-48, 2010.
  11. P Bellasi et al. CPM: A Cross-Layer Framework to Efficiently Support Distributed Resources Management. In Workshop on Parallel Programming and Run-time Management for Many-core Architectures (PARMA'2010). Hannover, Germany, pp 293–298, February 2010
  12. P Bellasi, S Corbetta and W Fornaciari. Hierarchical Power Management. In Designing for Embedded Parallel Computing Platforms: Architectures, Design Tools, and Applications (DEPCP'2010). Dresden, Germany, March 2010.

CONTREX


Up to now, mission & safety critical services of SoS (Systems of Systems) have been running on dedicated and often custom-designed HW/SW platforms. In the near future such systems will be accessible, connected with, or executed on devices comprising off-the-shelf HW/SW components. Significant improvements have been achieved in supporting the design of mixed-criticality systems by developing predictable computing platforms and mechanisms for segregation between applications of different criticalities sharing computing resources. Such platforms enable techniques for the compositional certification of applications’ correctness, run-time properties and reliability. CONTREX (Oct 2013 - Nov 2016) will complement these important activities with analysis and segregation along the extra-functional properties of real-time behaviour, power, temperature and reliability.


These properties will become major cost roadblocks when:
  1. scaling up the number of applications per platform and the number of cores per chip
  2. operating battery-powered devices, or
  3. switching to smaller technology nodes
CONTREX will enable energy-efficient and cost-aware design through analysis and optimisation of real-time behaviour, power, temperature and reliability with regard to application demands at different criticality levels. To reinforce European leadership and industrial competitiveness, the CONTREX approach will be integrated into existing model-based design methods that can be customized for different application domains and target platforms. CONTREX will focus on the requirements derived from the automotive, aeronautics and telecommunications domains, evaluate its effectiveness on three industrial demonstrators, and drive integration into existing standards for design and certification. Valuable feedback to industrial design practice, standards, and certification procedures is pursued. The economic goal is to improve energy efficiency by 20% and to reduce cost per system by 30% thanks to a more efficient use of the computing platform.

View CONTREX official website.


  1. Ralph Görgen et al. CONTREX: Design of Embedded Mixed-Criticality CONTRol Systems under Consideration of EXtra-Functional Properties, 2016 Euromicro Conference on Digital System Design (DSD), Limassol, pp. 286-293, 2016.
  2. Carlo Brandolese, Luigi Rucco, William Fornaciari. An optimal model to partition the evolution of periodic tasks in wireless sensor networks. IEEE international symposium on a world of wireless mobile and multimedia networks. Sydney, Australia, June 2014
  3. Carlo Brandolese, Luigi Rucco, William Fornaciari. Optimal wakeups clustering for highly-efficient operation of WSNs periodic applications. IEEE international conference on information communication and embedded systems (ICICES), Chennai, Tamilnadu, India, February 2014

HARPA


Application requirements, power, and technological constraints are driving the architectural convergence of future processors towards heterogeneous many-cores. This development is confronted with variability challenges, mainly the susceptibility to time-dependent variations in silicon devices. Increasing guard-bands to battle variations does not scale, because the worst-case cost impact becomes too large for technology nodes around 10 nm. The goal of HARPA (Sept 2013 - Aug 2016) was to enable next-generation embedded and high-performance heterogeneous many-cores to cost-effectively confront variations by providing Dependable-Performance: correct functionality and timing guarantees throughout the expected lifetime of a platform under thermal, power, and energy constraints.


The HARPA solution employed a cross-layer approach. A middleware implemented a control engine that steers software/hardware knobs based on information from strategically dispersed monitors. This engine relied on technology models to identify/exploit various types of platform slack - performance, power/energy, thermal, lifetime, and structural (hardware) - to restore timing guarantees and ensure the expected lifetime amidst time-dependent variations. Dependable-Performance is critical for embedded applications to provide timing correctness; for high-performance applications, it is paramount to ensure load balancing in parallel phases and fast execution of sequential phases. The lifetime requirement has ramifications on the manufacturing process cost and the number of field-returns. The HARPA novelty was in seeking synergies in techniques that had been considered virtually exclusively in the embedded or high-performance domains (worst-case guaranteed partly proactive techniques in embedded, and dynamic best-effort reactive techniques in high-performance). HARPA demonstrated the benefits of merging concepts from these two domains by evaluating key applications from both segments running on embedded and high-performance platforms.
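As an illustration only (not the actual HARPA middleware), a minimal C++ sketch of such a monitor-driven control engine follows: read the monitors, decide how much slack is available, and actuate a knob such as a DVFS level. Monitors, thresholds and knobs are hypothetical placeholders.

```cpp
// Minimal sketch of the "monitors -> control engine -> knobs" loop described
// above. Not the HARPA middleware: the monitors, slack estimate and knob are
// hypothetical placeholders used only to illustrate the control pattern.
#include <cstdio>

struct Monitors {
    double temperature_c;   // from a thermal sensor
    double slack_ms;        // measured distance from the timing deadline
};

struct Knobs {
    int dvfs_level;         // 0 = lowest frequency, 3 = highest
};

// Control engine: exploit performance slack to save power, and restore
// performance (or cool down) when timing or thermal guarantees are at risk.
Knobs decide(const Monitors& m, Knobs k) {
    if (m.temperature_c > 85.0) {
        k.dvfs_level = 1;       // thermal emergency: slow down
    } else if (m.slack_ms < 0.0) {
        k.dvfs_level = 3;       // deadline at risk: run at full speed
    } else if (m.slack_ms > 5.0 && k.dvfs_level > 0) {
        k.dvfs_level--;         // plenty of slack: lower frequency, save energy
    }
    return k;
}

int main() {
    Knobs k{2};
    k = decide({/*temperature_c=*/70.0, /*slack_ms=*/8.0}, k);
    std::printf("dvfs level = %d\n", k.dvfs_level);
}
```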

View HARPA official website.


Professor:
  1. Prof William Fornaciari
Research Fellow:
  1. Dr Giuseppe Massari
PhD Student:
  1. Simone Libutti
Publications:
  1. Davide Zoni et al. BlackOut: Enabling fine-grained power gating of buffers in Network-on-Chip routers. Journal of Parallel and Distributed Computing, January 2017.
  2. Davide Zoni and William Fornaciari. Modeling DVFS and Power-Gating Actuators for Cycle-Accurate NoC-Based Simulators. ACM Journal on Emerging Technologies in Computing Systems, Vol. 12, No. 3, Article 27, September 2015
  3. Davide Zoni, Federico Terraneo, and William Fornaciari. A DVFS Cycle Accurate Simulation Framework with Asynchronous NoC Design for Power-Performance Optimizations. J. Signal Process. Syst. 83, 3, pp. 357-371, June 2016.
  4. Patrick Bellasi, Giuseppe Massari, and William Fornaciari. Effective Runtime Resource Management Using Linux Control Groups with the BarbequeRTRM Framework. ACM Trans. Embed. Comput. Syst. 14, 2, Article 39, March 2015.

OPENMEDIAPLATFORM


OpenMediaPlatform (Jan 2008 - Dec 2009) aims at defining an open service infrastructure for media-rich end-user devices that will address software productivity and optimal service delivery challenges. OMP innovatively combines two main streams of modern software engineering:
  1. split compilation techniques that leverage component-based SW engineering and provide binary portability (CLI standard) with no performance penalty
  2. standardized and open application programmers’ interfaces (API) for advanced multimedia stacks that support resource management and context awareness at component level
Together, these two technologies will deliver a computing tools infrastructure and a media infrastructure to ensure optimal deployment of dynamically composed media services that scale with device and network capabilities. The OMP approach will be demonstrated using scalable video coding on two different platforms.

Read more information on OpenMediaPlatform here.
  1. M Tartara, S Campanoni, G Agosta, S Crespi Reghizzi. Just-in-time compilation on ARM processors. 2009
  2. S Campanoni, G Agosta, S Crespi Reghizzi, A Di Biagio. A highly flexible, parallel virtual machine: Design and experience of ILDJIT. Software: Practice and Experience, 2010

SMECY


The mission of SMECY (Feb 2010 - Feb 2013) is to develop new programming technologies enabling the exploitation of many-core (hundreds of cores) architectures. The goal of this ARTEMIS project is to launch an ambitious European initiative to match initiatives in Asia and the USA and to enable Europe to become the leader. View the SMECY official website.
  1. G Agosta, M Cartron, A Miele. Fault tolerance. Smart Multicore Embedded Systems, pp. 81-101, 2013

TOISE


Future European applications, such as Smart Grids for the electricity network, smart low-energy home appliances, environmental or infrastructure sensor networks and, more generally, the management of trusted components, require more security over communication networks and wireless communications; a number of technologies need to be developed and put in place to make these solutions smarter and more secure.
TOISE proposes to address the secure, tamper-resistant solutions needed by such embedded applications. Trusted Computing, now common practice in the PC and workstation area, provides a proven approach against new attacks by implementing a chain of authentication and integrity from the boot of the computing platform up to application set-up.


The objective of TOISE (Jan 2011 - Dec 2013) is to define, develop and validate trusted hardware and firmware mechanisms applicable both to lightweight embedded devices and as security anchors within related embedded platforms.
The aim is to keep Europe a worldwide player in the field of efficient implementation of secure integrated devices for future European applications. A large initiative is proposed to align a common European position in the area. Several TOISE partners participate in related standardisation working groups, so TOISE will allow European solutions to be developed and promoted in not-yet-harmonised bodies. TOISE brings together European, manufacturing-based semiconductor companies such as STMicroelectronics and Numonyx and system actors such as EADS, Gemalto, Hellenic Aerospace, Proton and Thales to develop safe and secure solutions. SMEs develop enabling blocks as IP: Secure-IC and Magillem Design. SMEs contribute to applying the technology to the targeted applications: AZCom and TST. CEA-Leti, as a security evaluation centre, performs security tests. Seven research labs from the participating countries carry out the further enabling research.

View TOISE official website.

