
Invited Talks 2011


The Exascale Challenge
S. Borkar, Intel Corporation, USA

Compute performance has increased by orders of magnitude over the last few decades, made possible by continued technology scaling. The technology treadmill will continue, and one can expect to reach Exascale-level performance in about 10 years. However, the same physics that helped in the past will now pose barriers: "business as usual" will not be an option. This talk will discuss potential solutions across all disciplines, such as circuit design, test, architecture, system design, programming systems, and resiliency, to pave the road towards Exascale performance.

Shekhar Borkar is an Intel Fellow, an IEEE Fellow, director of Academic Programs and Research, and director of Exascale research in Intel Labs. He holds an MSEE from the University of Notre Dame and an MSc in Physics from the University of Bombay. His research interests are low-power, high-performance digital circuits.

Dependable Computing and Assessment of Dependability
J. Arlat, LAAS-CNRS, France

This talk will cover the main design and evaluation issues to be considered when developing dependable computer systems. The first part will briefly address the fault tolerance techniques (encompassing error detection, error recovery and fault masking) that can be used to cope with accidental faults (physical disturbances, software bugs, etc.) and, to some extent, malicious faults (e.g., attacks, intrusions). The second part will cover the methods and techniques - both analytical and experimental - that can be used to objectively assess the level of dependability achieved. The trend in controlled experiments, from simple fault injection-based tests meant for evaluating a specific fault-tolerant computer architecture towards benchmarks aimed at comparing the dependability features of several computer systems, will also be illustrated by means of selected examples.
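Fault masking, one of the techniques listed above, is classically achieved by triple modular redundancy (TMR): three redundant channels compute the same result and a majority voter hides a single faulty value. A minimal sketch (illustrative only, not from the talk; `tmr_vote` is a hypothetical name):

```python
def tmr_vote(a, b, c):
    """Majority vote over three redundant computation results.

    Fault masking: a wrong value from any single channel is outvoted
    by the two agreeing channels, so the error never propagates.
    """
    if a == b or a == c:
        return a
    if b == c:
        return b
    raise RuntimeError("no majority: more than one channel disagrees")

# A transient fault corrupting one of three replicated runs is masked:
results = [7 * 6, 7 * 6, 99]   # third channel hit by an error
assert tmr_vote(*results) == 42
```

Note that the voter detects, but cannot mask, the case where two channels fail with different values; that is the boundary between fault masking and mere error detection.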

Jean Arlat is a Directeur de Recherche with CNRS, the French National Organization of Scientific Research, and a member of the research group “Dependable Computing and Fault-Tolerance” at LAAS-CNRS, Toulouse, France, a group that he led from 2003 to 2007. From 2007 to 2010, he coordinated the “Critical Information Systems” research area at LAAS. He is currently Interim Director of LAAS. His research interests include the architecting of safe and secure embedded computerized systems, and the dependability assessment of computer systems — using both analytical modeling and experimental approaches. He has authored or co-authored more than 150 publications in the domain of dependable and fault-tolerant computing, including 21 book sections. He is a member of IEEE and ACM, and chaired the IFIP WG 10.4 “Dependable Computing and Fault Tolerance” (1999-2005). He is also a member of IFIP WG 10.2 “Embedded Systems”. In 2007, he received the IFIP Silver Core.

Quality of Test – Fault Models and Test Methods
J. Rajski, Mentor Graphics Corporation, USA

The actual quality of manufacturing test is a result of the product quality expected by the market that can be achieved in a given semiconductor technology with the currently available test methods at acceptable cost. As the quality requirements and semiconductor technology change, the test methods have to change accordingly. This talk discusses how recent, as well as soon-to-appear, characteristics of semiconductors will change defect profiles, and what changes are expected in test methodology. The devices manufactured in 30, 20 and 10 nm technologies will potentially be very large by today’s standards; they will also have new characteristics implied by factors such as process variability. The semiconductor industry has progressively adopted more and more sophisticated fault models that use timing as well as layout information. What other fault models will be required to provide a robust measure of quality of test? The presentation will review some of the most promising extensions in that area, including new emerging fault models and adaptive test techniques. Structural DFT was introduced to provide automation in test pattern generation and fault simulation. Test compression was invented, on top of scan, to reduce the cost of manufacturing test. What other technologies will be needed to address the issues of growing design sizes, increased process variability, and new defect mechanisms? What is the impact of 3D IC technology on test methods? The presentation will examine hybrid techniques that use test compression and logic BIST to achieve manufacturing test objectives as well as system reliability.
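To make the relationship between fault models and quality of test concrete: under the classical single stuck-at model, the fault coverage of a pattern set is the fraction of faults whose effect reaches an output for at least one pattern. A toy fault simulator over a two-gate netlist (an illustrative sketch with hypothetical names, not a description of Mentor's tooling):

```python
def simulate(pattern, fault=None):
    """Evaluate a tiny netlist (c = a AND b, y = c OR d).

    'fault' optionally forces one line to a stuck value,
    e.g. ('c', 0) models line c stuck-at-0."""
    v = dict(pattern)
    def val(line):
        return fault[1] if fault and fault[0] == line else v[line]
    v['c'] = val('a') & val('b')
    v['y'] = val('c') | val('d')
    return val('y')

# Every line can be stuck-at-0 or stuck-at-1: 10 single stuck-at faults.
faults = [(line, s) for line in ('a', 'b', 'c', 'd', 'y') for s in (0, 1)]
tests = [{'a': 1, 'b': 1, 'd': 0}, {'a': 0, 'b': 0, 'd': 1}]

# A fault is detected if some pattern makes the faulty output differ
# from the fault-free output.
detected = {f for f in faults
            for t in tests if simulate(t, f) != simulate(t)}
coverage = len(detected) / len(faults)   # these two patterns reach 0.5
```

More refined fault models (transition, path-delay, cell-aware) follow the same detect-or-miss scheme, but over timing- and layout-derived fault lists rather than plain line/value pairs.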

Janusz Rajski is a chief scientist and the director of engineering for the Silicon Test Solutions products group at Mentor Graphics. He has published more than 200 research papers and is co-inventor of 58 US patents. He is also the principal inventor of Embedded Deterministic Test (EDT™) technology used in the first commercial test compression product, TestKompress™. He was co-recipient of the 1993 Best Paper Award for the paper on logic synthesis published in the IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, co-recipient of the 1995 and 1998 Best Paper Awards at the IEEE VLSI Test Symposium, co-recipient of the 1999 and 2003 Honorable Mention Awards at the IEEE International Test Conference, co-recipient of the 2010 Best Paper Award at the IEEE European Test Symposium, co-recipient of the 2008 Best Paper Award at the Asian Test Symposium and of the 2009 Best Paper Award at VLSI Design, as well as co-recipient of the 2006 IEEE Circuits and Systems Society Donald O. Pederson Outstanding Paper Award recognizing the paper on embedded deterministic test published in the IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. He served as Program Chair of the IEEE International Test Conference. In 2009 he received the Stephen Swerling Innovation Award from Mentor Graphics “for his breakthrough innovation, TestKompress, and his many contributions to revitalizing Mentor's DFT business to its current position as #1 test business in EDA”.

Fault Injection-Based Assessment of Software Techniques for Hardware Fault Tolerance
J. Karlsson, Chalmers University of Technology, Göteborg, Sweden

In this talk, I will present results from a series of fault injection experiments conducted to assess the effectiveness of software-based mechanisms for detecting and tolerating errors caused by transistor faults. Technology and voltage scaling are making integrated circuits increasingly vulnerable to transistor aging, process variations and ionizing particles. This has forced chip manufacturers to provide microprocessors and other integrated circuits with sophisticated mechanisms for error detection and fault tolerance. However, since it is economically infeasible for a chip manufacturer to guarantee detection of all possible transistor faults, I believe that software-based error detection and fault tolerance techniques will play an increasingly important role in protecting future computer systems against the expected increase in hardware error rates. My presentation will cover different approaches for implementing time redundancy, control flow checking and run-time assertions in software. I will discuss how these techniques can be implemented at the machine-code level, and at the source-code level using aspect-oriented programming. We have evaluated the error coverage of such implementations with respect to single bit-flip errors in CPU registers and main memory locations. I will show how the error coverage varies across the different implementations. For mechanisms implemented by aspect-oriented programming, I will show how compiler optimization affects error coverage. I will also discuss the validity of using single-bit errors for assessing the error coverage of software-based mechanisms for hardware fault tolerance.
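Two of the ingredients mentioned above, the single bit-flip fault model and time-redundant (duplicated) execution, can be sketched as follows. This is an illustrative sketch, not code from the experiments; `inject_bit_flip` and `duplicated` are hypothetical names:

```python
import random

def inject_bit_flip(value, width=32, rng=random):
    """Single bit-flip fault model: flip one uniformly chosen bit of a
    register-sized value."""
    return value ^ (1 << rng.randrange(width))

def duplicated(compute, x, fault_in=None):
    """Time redundancy: execute twice and compare. 'fault_in' selects
    which run (if any) gets a bit flip injected into its result."""
    r1, r2 = compute(x), compute(x)
    if fault_in == 1:
        r1 = inject_bit_flip(r1)
    elif fault_in == 2:
        r2 = inject_bit_flip(r2)
    if r1 != r2:
        return None            # mismatch: transient error detected
    return r1

# Small campaign: every injected single bit flip in either run is
# detected, because the flip always changes exactly one of the results.
square = lambda x: x * x
assert duplicated(square, 7) == 49
assert all(duplicated(square, 7, fault_in=run) is None
           for run in (1, 2) for _ in range(100))
```

Real campaigns inject into registers and memory during execution rather than into final results, which is exactly why coverage varies between implementations and is worth measuring.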

Johan Karlsson holds the Saab Endowed Chair Professorship in Dependable and Robust Real-Time Systems at Chalmers University of Technology, Göteborg, Sweden. He is with the Department of Computer Science and Engineering, where he is a co-leader of the Dependable Real-Time Systems group. His research interests span design, verification and assessment of fault tolerant distributed and embedded real-time systems. His current research focuses on software-based fault tolerance, fault injection techniques, system diagnosis and redundancy management in distributed real-time systems.

Automating Software Tool Qualification for Design and Test of Safety-Critical Systems
V. Izosimov, Semcon AB

This talk will discuss the problem of developing safety-critical systems and the level of trust that can be assigned to testing and design tools. Safety standards, in particular ISO 26262, require qualification of software development and testing tools used for development of the "Item". However, this qualification is a very time-consuming process with unclear guidelines, not always affordable and not always correct. Once a tool is qualified, any change to the tool has to be subjected to an impact analysis with respect to possible violation of safety goals. In case of potential violations, the tool has to be re-qualified, potentially at great effort. Thus, designers and testers are often given a choice: either continue with the old "buggy" version of the tool or perform a time-consuming re-qualification. This often leads to "buggy" and ineffective tool versions being used for too long. Another problem with development tools is that new and promising tools cannot be used for safety-critical system designs unless they are "proven in use". This creates another "Catch-22" and prevents new tools from entering the safety market. Automating software tool qualification, based on the guidelines in the safety standards and generally accepted safety practices, is a possible practical solution to these problems. It can increase the flexibility of the development process for safety-critical applications without violating safety goals.

Viacheslav Izosimov (Slava for short) is a Systems Architect at Embedded Intelligent Solutions (EIS) by Semcon AB. He performs advanced consultancy work in the area of safety-critical embedded systems, functional safety and reliability; in particular, he works with the ISO 26262 and IEC 61508 standards. He is also a TÜV Certified Functional Safety Engineer. Viacheslav defended his PhD in Computer Systems at Linköping University (LiU) in 2009. His PhD thesis, entitled "Scheduling and Optimization of Fault-Tolerant Distributed Embedded Systems", dealt with several aspects of design optimization and scheduling of distributed embedded systems with fault tolerance against transient and intermittent faults. Dr. Izosimov is the author of more than 20 papers in the area of fault tolerance, design optimization and testing, and a co-recipient of the Best Paper Award at the Design, Automation and Test in Europe Conference (DATE 2005).

Advanced Test Methods for Consumer Products
C. Heer, Intel Mobile Communications

Microelectronic circuits for automotive and aerospace applications are subject to very high quality requirements. Specific test methods are therefore employed to screen out, in particular, those devices that might fail early. These classic early-life failure mechanisms ("infant mortality") are deliberately triggered, above all in memory components, by specific environmental conditions (temperature, voltage). Less robust cells and components then exhibit characteristic failure signatures, by which they can be identified and the affected devices screened out. Meanwhile, in consumer products (mobile phones etc.), so-called soft fails (transient failure mechanisms) have come to cause significant failure rates. Today's production tests can identify these failures only inadequately. Here too, however, it is mostly the less robust cells or components that are affected or susceptible. Test methods from the automotive domain could therefore find application in the consumer sector in the future. The talk briefly explains the basic mechanisms behind early-life failures and compares them with the mechanisms behind soft fails. The test methods are then described, and their potential application to consumer products is discussed from a technical as well as an economic perspective.

Christian Heer is Division Vice President Design System & IP at Intel Mobile Communications, Munich, Germany. He leads an organization with sites in France (Nice), India (Bangalore), Austria (Linz) and Germany (Munich, Duisburg), responsible for the design system, including all EDA tools and methods used for the development of system-on-chip products at Intel Mobile Communications (RTL-to-GDS flow, analog and RF design, system level, verification). IP comprises all digital standard-cell libraries, IO libraries and memory compilers for process technologies from 180 nm down to 28 nm, as well as the development of high-speed PHY components (USB, DDR) and the externally licensed microcontroller, DSP and application-specific IP modules. After receiving his Diplom in solid-state electronics from RWTH Aachen in 1990, he completed his doctorate in engineering at Universität Ulm in 1995. He has published more than 50 papers in international journals and at conferences. Dr. Heer has been a member of the technical program and organizing committees of several conferences (Design Automation Conference (DAC), Design, Automation and Test in Europe (DATE), IEEE International Symposium on Asynchronous Circuits and Systems (ASYNC)).

Designing Reliable Systems: What Can Be Done at Which Level?
R. Krämer, IHP GmbH, Frankfurt (Oder)

Using examples, the talk partitions a system and treats the various aspects of increasing system reliability in turn. First, at the hardware level, different approaches to increasing reliability are discussed, based on avoiding disturbances through asynchronous design methods and on the automatic insertion of additional hardware components to achieve specific, partial error correction. Then, at the system level, approaches are discussed that enable explicit redundancy management in distributed systems, for example in wireless sensor networks. It is further discussed to what extent targeted measures can make wireless communication systems lastingly more reliable, opening up new use cases, for example in car-to-car communication. Finally, a new approach is sketched for introducing different operating modes in innovative multiprocessors, which allows high operational flexibility combined with improved reliability, particularly for aerospace applications. All concepts are illustrated with concrete examples from research at IHP, which is thereby introduced as an interesting research partner.

Rolf Krämer was born in Duisburg in 1952. After studying electrical engineering and computer engineering and completing his doctorate at RWTH Aachen, he worked from 1985 at the Philips laboratories in Hamburg and Aachen in various fields and positions of responsibility, producing numerous publications and patents. Since November 1998 he has headed the Systems chair at BTU Cottbus, whose main focus is on distributed systems and their management. In addition, Prof. Krämer heads the Wireless Communication Systems department at IHP in Frankfurt (Oder). As a co-founder of the companies lesswire AG and Silicon Radar GmbH, he strives to turn research results into commercial innovation. Since 2009, Mr. Krämer has also been active as a business angel in the Berlin-Brandenburg network.

Last updated on Wednesday, 22 June 2011, 12:49


The tutorial "Hot Topics in Analog Design Automation for Yield and Reliability" has unfortunately been cancelled. Registration for the conference is also still possible on site.