Invitational Workshop on the Future of Virtual Execution Environments

hosted by IBM Research

IBM Learning Center
Armonk, New York

September 15-17, 2004

For three days in September 2004, a few dozen experts in virtual execution environments gathered to discuss the future of this important area. The group included members of industry, academia, and open source projects. This site documents the workshop. It includes the workshop objectives, a list of attendees, the agenda, and abstracts, slides, and video for most talks.


Organizing Committee

  • Bob Blainey, Distinguished Engineer and CTO Java
  • Paul Buck, Director of Java Strategy
  • John Duimovich, Distinguished Engineer and CTO Virtual Machines
  • Michael Hind, Manager, Dynamic Optimization Group
  • Jim Rymarczyk, IBM Fellow, Systems and Technology Group
  • Vivek Sarkar, Senior Manager, Programming Technologies, IBM Research
  • Pat Selinger, IBM Fellow and VP Data Management Architecture & Technology
  • Kevin Stoodley, IBM Fellow and CTO Compilation Technology
  • Mark Wegman, CTO Software Technology Research

Virtual execution environments (VEEs), otherwise known as managed runtimes, abstract machines, or virtual machines, have been used in support of portable (usually interpreted) programming environments for over 25 years. As the available computing power and memory have increased exponentially, VEEs have become a popular way of supporting computing environments as diverse as mobile phones, desktop computers, servers and supercomputers. Users of VEEs have reaped many benefits, including vastly improved code portability, transparent support for persistence and remoting, support for dynamic and flexible object-oriented programming models, advanced tools utilizing rich program meta-information, automated memory management, platform-independent models for multithreaded programming, and exploitation of unique performance opportunities due to dynamic optimization and compilation.

However, VEEs have also presented problems for users, such as inadequate interoperability between different programming languages and poor performance. The result has been a proliferation of VEEs that provide first-class support for only one or a small set of related programming languages, and that are often available on only one or a few hardware or operating system platforms. Other problems include widely varying VEE capabilities and performance, expensive and coarse-grained interaction between VEEs, awkward and slow interaction between VEEs and native (e.g. C or C++) environments, poor sharing of resources between VEEs and native environments, and outstanding security and digital rights management vulnerabilities.

We see these as emerging industry problems that require broad cooperation and creativity among producers and consumers of VEEs in order to create solutions that provide value and flexibility to VEE-based software developers and users, while enabling an even playing field for competition. We also believe that perspectives and cooperation from varying levels of virtualization, from operating systems to scripting languages, will be needed to solve these problems.

The Future of VEEs workshop is an event that brings together selected researchers and practitioners from academia, open source projects, and leading corporations. The workshop is an opportunity for participants to share their work on virtual runtimes, their opinions on the strengths and failings of past and current virtual runtimes, and their vision for the future.



Tuesday, September 14

Registration information available at the IBM Learning Center check-in desk

7:00 pm - 8:30 pm	Dinner available for those arriving in time, main dining hall

8:30 pm - 10:30 pm	Reception in fireplace lounge, snacks and beverages available

Wednesday, September 15

7:00 am - 8:30 am	Breakfast in main dining hall

8:30 am			Welcome and agenda review
			Michael Hind, IBM

8:45 am			Round the table introductions, or One minute madness!

9:15 am			Virtual Execution Environments:  Challenges and Opportunities
			Bob Blainey, IBM

9:45 am			Break

10:00 am - 11:00 am	Session 1
			Chair: Ben Zorn, Microsoft

10:00 am		A Case for Virtual Instruction Set Computers
			Vikram Adve, University of Illinois at Urbana-Champaign

10:30 am		Costs and Benefits of Non-conformity
			Hans Boehm, Hewlett-Packard

11:00 am		Break

11:15 am - 12:15 pm	Session 2
			Chair: Michael Franz, UC Irvine

11:15 am		Python Implementation Strategies
			Jeremy Hylton, Python

11:45 am		Dynamic Optimization Myths
			Michael Hind, IBM		

12:15 pm - 1:15 pm	Lunch in main dining hall

1:15 pm - 3:00 pm	Walk around the grounds of the Learning Center

3:00 pm - 4:30 pm	Session 3
			Chair: Vikram Adve, UIUC

3:00 pm			Thoughts on the Future of Runtime Systems
			Ben Zorn, Microsoft

3:30 pm			Parley:  Federated Virtual Machines
			David Grove, IBM

4:00 pm			Mono Past and Future
			Paolo Molaro, Mono

4:30 pm			Break

4:45 pm - 5:45 pm	Session 4
			Chair: Vivek Sarkar, IBM

4:45 pm			Concurrency:  Where to draw the lines?
			Doug Lea, SUNY Oswego

5:15 pm			On the Need for Data Management Primitives in a VEE
			Jim Kleewein, IBM

5:45 pm			Adjourn

7:00 pm			Dinner in main dining hall

9:00 pm			BOF or free time

Thursday, September 16

7:00 am - 8:30 am	Breakfast in main dining hall

8:30 am			Day 1 recap and group discussion
			Day 2 agenda 
			Michael Hind, IBM

9:00 am - 10:30 am	Session 5
			Chair: John Duimovich, IBM

9:00 am			Virtual Machines:  Past and Future
			Bob Vandette, Sun Microsystems

9:30 am			Assuring Software Protection in Virtual Machines
			Andrew Appel, Princeton

10:00 am		Vertical Performance and Environment Monitoring for Continuous Program Optimization
			Evelyn Duesterwald, IBM

10:30 am		Break

10:45 am - 12:15 pm	Session 6
			Chair: David Bacon, IBM

10:45 am		Late Binding and Dynamic Implementation
			Ian Piumarta, HP

11:15 am		Requirements and Issues of VXEs for Mobile Terminals
			Kari Systa, Nokia

11:45 am		Experiences in Using Virtual Machines for Standard Application Development
			Christoph Rohland, SAP

12:15 pm - 1:15 pm	Lunch

1:15 pm - 2:15 pm	Session 7
			Chair: David Chase, Sun Microsystems

1:15 pm			Future of JRockit and Tools
			Joakim Dahlstedt, BEA

1:45 pm			Dynamic, Data-driven Applications Systems
			Frederica Darema

2:15 pm			Break

2:30 pm - 4:00 pm	Session 8
			Chair: Hans Boehm, Hewlett-Packard

2:30 pm			Virtual Machine Monitors:  The Original Virtual Execution Environments
			Mendel Rosenblum, Stanford University

3:00 pm			Hardware Support for Scalable Java Virtual Machines
			Cliff Click, Azul Systems

3:30 pm			Modularity, Hardware-based Profiling and Mixed ISA 
			Execution within Managed Runtimes
			Suresh Srinivas, Intel

4:00 pm			Break

4:15 pm - 5:45 pm	Session 9
			Chair: Doug Lea, SUNY Oswego

4:15 pm			Language and Virtual Machine Challenges for Large-scale Parallel Systems
			Vivek Sarkar, IBM

4:45 pm			The PyPy Approach Toward Building Virtual Machines
			Armin Rigo, PyPy

5:15 pm			The Usefulness of Unsafe Extensions
			David Chase, Sun Microsystems

5:45 pm			Adjourn

7:00 pm			BBQ dinner (on the patio, weather permitting)

9:00 pm			BOF or free time

Friday, September 17

7:00 am - 8:30 am	Breakfast in main dining hall

8:30 am			Day 2 recap and group discussion
			Day 3 agenda 
			Michael Hind, IBM

9:00 am - 10:00 am	Session 10
			Chair: Evelyn Duesterwald, IBM

9:00 am			Mozilla's needs from a VM
			Brendan Eich, Mozilla

9:30 am			Garbage Collection for Real-time Systems
			David Bacon, IBM

10:00 am		Break

10:15 am - 11:45 am	Session 11
			Chair: Cliff Click, Azul Systems

10:15 am		VEE:  Verify Everything, Everytime
			Michael Franz, UC Irvine 

10:45 am		Virtual Machines for High-level Feature Support
			Dan Sugalski, Perl

11:15 am - 12:30 pm	Group Discussion, next steps and wrapup
			Bob Blainey, IBM

12:30 pm		Adjourn (lunch available)

Below are the talk abstracts, slides, and streaming video for most talks and discussions. We recommend that, while viewing the video, you also have the slides available.
Don't have time to listen to all the presentations? You may want to listen to the workshop motivation or the summary discussion.

  • Vikram Adve, A Case for Virtual Instruction Set Computers
    In this talk, I will try to make a case for a new organization of the hardware/software interface, which we refer to as Virtual Instruction Set Computers, or VISC. This organization hides the native hardware instruction set from all software and replaces it with a richer, persistent instruction set representation for software. Such a change benefits architectures, compilers, operating systems, and virtual machines; this talk will focus on the software implications. Compilers benefit because a rich, persistent representation allows sophisticated compiler technology to be applied to all parts of a system collectively (crossing traditional boundaries such as application/OS, application/VM, and VM/OS), and at all points in a software lifetime (including link-time, install-time, runtime, and ``idle-time'' between program runs). The LLVM compiler infrastructure embodies such a compiler system. Operating systems could benefit through greater portability, better architectural mechanisms, and better compiler-based enforcement of memory safety. Virtual machines (and compilers) could benefit because most code generation and optimization tasks can be moved out of individual virtual machines (and compilers) and into a common, language-independent translation layer. Overall, the VISC framework simplifies and raises the hardware/software interface, with broad implications for many key aspects of hardware design and system software.
    Slides, Video: Part 1, Part 2, Part 3 + Discussion

    Relevant papers: LLVM, LLVA

  • Andrew Appel, Assuring Software Protection in Virtual Machines
    Many virtual machines, including Java and .Net, use type-checking as a software protection mechanism. Can this be as secure as hardware protection via virtual memory for running untrusted applications? Perhaps. I'll outline an approach to building secure virtual machines using type-preserving compilation and formal verification.
    Slides, Video: Part 1, Part 2, Part 3 + Discussion

  • David F. Bacon, Realtime Garbage Collection
    Slides, Video: Part 1, Part 2, Part 3 + Discussion


  • Bob Blainey, Virtual Execution Environments: Challenges and Opportunities
    Slides, Video: Part 1, Part 2, Part 3 + Discussion, part 1, Discussion, part 2

    Wrapup and Discussion
    Slides and video

  • Hans Boehm, Costs and benefits of nonconformity
    Execution environments for Java or CLI normally use runtime representations and conventions that differ substantially from those used by the platform standard ABI for languages like C and C++. Based on our experience with conservative garbage collection and gcj, which mostly conforms to a standard C++ ABI, we explore the tradeoffs involved in those differences.
    Slides, Video: Part 1, Part 2, Part 3 + Discussion


  • David Chase, The Usefulness of Unsafe Extensions
    Unsafe extensions aren't just peek and poke. A good set of unsafe extensions can simplify a VM's interface to native code and OS services, while still retaining much of the checking that comes with programming in safe languages. "Good" unsafe extensions include additional types, operations, calling conventions, and method annotations. (An illustrative sketch appears below.)
    Slides, Video: Part 1, Part 2, Part 3 + Discussion
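
    To make "checked unsafety" concrete, here is a minimal, purely hypothetical Java sketch of the kind of extension the abstract describes; the RawBuffer class and its unsafeReadInt/unsafeWriteInt intrinsics are invented for illustration and are not taken from any VM discussed at the workshop.

        // Hypothetical sketch only: an "unsafe" extension that exposes raw native
        // memory to managed code for interop, yet keeps bounds checking.
        final class RawBuffer {
            private final long baseAddress;   // start of a native allocation
            private final long length;        // size of the allocation in bytes

            RawBuffer(long baseAddress, long length) {
                this.baseAddress = baseAddress;
                this.length = length;
            }

            // "Peek": read a 32-bit value, but only after a bounds check that the
            // runtime could often eliminate when the offset is provably in range.
            int getInt(long offset) {
                checkBounds(offset, 4);
                return unsafeReadInt(baseAddress + offset);
            }

            // "Poke": write a 32-bit value with the same checking discipline.
            void putInt(long offset, int value) {
                checkBounds(offset, 4);
                unsafeWriteInt(baseAddress + offset, value);
            }

            private void checkBounds(long offset, int size) {
                if (offset < 0 || offset + size > length) {
                    throw new IndexOutOfBoundsException("offset " + offset);
                }
            }

            // In a real VM these would be intrinsics; they are stubbed here so the
            // sketch is self-contained.
            private static int unsafeReadInt(long address) { return 0; }
            private static void unsafeWriteInt(long address, int value) { }
        }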


  • Cliff Click, Hardware Support for Scalable Java Virtual Machines
    There are many roadblocks to usefully scaling a JVM to very large sizes, including GC pause times for multi-gigabyte heaps, memory bandwidth and latency issues, and lock contention. In this talk, Cliff will discuss how Azul is building systems designed to support very large JVMs; these systems are optimized to run 100s of concurrently executing threads and 100s of gigabytes of heap. Azul hardware includes read & write barriers allowing a high-performance, parallel, concurrent and relocating GC. Specialized CPU instructions reduce cache pressure from object allocations and enable stack-based allocation. In addition, we have support for a form of speculative locking. Azul is currently testing systems in the lab and plans to enter the market in the first half of 2005.
    Video: Part 1, Part 2, Part 3 + Discussion
    For slides, contact the author.

  • Joakim Dahlstedt, Future of JRockit and Tools
    JRockit is BEA’s JVM focused on enterprise applications. My presentation will focus on the future development directions of JRockit; in short, we are continually focused on creating a faster, easier-to-use JVM that adaptively tunes itself by analyzing the application. In addition, I will talk about some of the tools that we are planning to add that expose this analysis data to the Java programmer (profiling, memory leak detection, etc.).
    Slides, Video: Part 1, Part 2, Part 3 + Discussion

  • Frederica Darema, Dynamic Data Driven Applications Systems
    This talk will discuss the capabilities, research challenges and opportunities to enable Dynamic Data Driven Application Systems (DDDAS). DDDAS entails the ability to incorporate additional data into an executing application (these data can be archival or collected on-line), and, conversely, the ability of applications to dynamically steer the measurement process. Such capabilities offer the promise of augmenting the analysis and prediction capabilities of application simulations and the effectiveness of measurement systems, with a potential major impact in many science and engineering application areas. Enabling DDDAS requires advances in the application modeling methods and interfaces, in algorithms tolerant to perturbations of dynamic data injection and steering, and in systems software to support the dynamic environments of concern here, and will impact the kind of Cyberinfrastructure support that needs to be provided. Research and development of such technologies requires synergistic multidisciplinary collaboration in the applications, algorithms, software systems, and measurement systems areas, involving researchers in basic sciences, engineering, and computer sciences. The talk will address specifics on such technology challenges, and provide examples from funded projects.


  • Evelyn Duesterwald, Vertical Performance and Environment Monitoring for Continuous Program Optimization
    Advances in software and hardware technologies and recent trends towards virtualization and standardization are rapidly adding to the complexity of the execution stack. As a result, performance tuning is turning into an increasingly challenging task for developers. Complex interactions among execution layers need to be understood in order to properly diagnose and eliminate performance bottlenecks. This talk presents a software architecture for continuous program optimization (CPO) to assist in and automate the challenging task of performance tuning a system. A core component of CPO is an infrastructure for performance and environment monitoring (PEM) that vertically integrates performance events from all execution layers to provide the necessary foundation for detecting, diagnosing, and eliminating performance problems. We designed and implemented a PEM prototype that feeds the vertical event stream to a performance visualizer, our first PEM client. This talk describes the CPO architecture, how PEM interacts with CPO, an experiment using the PEM visualization client to understand data gathered across multiple layers of the system, and how that data was used to positively affect system performance.
    Slides, Video: Part 1, Part 2, Part 3, Discussion

  • Brendan Eich, Mozilla's products and platform needs from a VEE.
    Slides, Video: Part 1, Part 2, Part 3 + Discussion
  • Michael Franz, VEE - Verify Everything, Everytime
    Despite numerous "trusted systems" initiatives, current systems software remains riddled with errors. The recent "Slammer" incident shows that an adversary exploiting just one such error (a "buffer overrun") can cause tens of thousands of hosts to fail around the world in just minutes. Even more surprisingly, for this particular vulnerability there had been a patch available more than six months earlier, and yet even many of the OS manufacturer's (Microsoft's) own computers succumbed to the attack. Two facts are becoming increasingly evident: First, operating systems are becoming so large and evolving so quickly that it is getting prohibitively expensive to manually inspect every line of code for the absence of errors. Second, the current approach of "patching" errors as they are discovered is failing us: on one hand, we now have attacks that can spread worldwide in minutes. On the other hand, if not even Microsoft can keep its own computers current, then how can one expect that other organizations will apply all patches immediately as they are released, and in the correct order?

    Worse yet, critical software is increasingly developed outside of the United States and/or using "community-based open-source" processes. Some of the developers might actually be agents of foreign nation states. Although one of the tenets of the open-source movement is that the resulting code is safer because users can inspect programs for the possible existence of back-doors, this is mostly wishful thinking. It is virtually impossible to manually audit millions of lines of code. On the other hand, most open source projects have virtually no audit controls, so that it is impossible to attribute individual code fragments to the programmers who inserted them. As a consequence, we may be basing trustworthy application programs on completely *un*trustworthy operating-system foundations. I believe the solution lies in "verifying everything". By applying recent mobile-code research results, we believe it is now possible to build a complete system in which *all of the operating system code* is verified prior to every execution, and we propose to build such a system. The only thing that would need to be trusted in such a system is a minimal safe-code platform core (encompassing a verifier and a small dynamic code generator), which would be small enough to make it feasible to manually verify it line by line, using techniques appropriate for mission-critical software, such as fly-by-wire control systems. The core could then be sealed along with the processing unit into a tamper-proof hardware implement. Everything above this layer would be verified, i.e., even the code in the root directory of the local hard drive would no longer need to be trusted.

    In this talk, I will address our first steps with the goal of creating such a system. The challenges are plentiful, ranging from the performance of the underlying platform to the performance of the verification mechanism. As I will illustrate as a sideline, the approach also opens up new vulnerabilities, namely complexity-based denial of service attacks.
    Slides, Video: Part 1, Part 2, Part 3 + Discussion

  • David Grove, Parley: Federated Virtual Machines
    The CLR embodies one approach to cross-language integration: many languages can be compiled into a common type system, bytecode format, etc. and be executed in a single virtual machine. An alternative approach is to define a lightweight inter-virtual machine interop layer that allows multiple virtual machines to interoperate within a single process/address space. We are exploring this alternative approach to cross-language (and cross VM) integration as part of the Parley research project. In this talk I will motivate why this approach is potentially attractive, discuss some of the technical details of the interop layer and the constraints it implies on participating virtual machines, and report on our initial (very preliminary) experiences with implementing it.
    Slides, Video: Part 1, Part 2, Part 3 + Discussion

  • Michael Hind, Debunking Dynamic Optimization Myths
    Programming languages that are executed by virtual machines face significant performance challenges beyond those confronted by traditional languages. First, portable program representations and dynamic language features force the deferral of most optimizations until runtime, inducing runtime optimization overhead. Second, modular program representations preclude many forms of whole-program interprocedural optimization. Third, virtual machines incur additional costs for runtime services such as security guarantees and automatic memory management. To address these challenges, mainstream virtual machine implementations include substantial infrastructure for online profiling, dynamic compilation, and feedback-directed optimization. This talk will survey the state-of-the-art in the areas of dynamic compilation and adaptive optimization in virtual machines by debunking several misconceptions about these two topics. (A simplified sketch of counter-based recompilation appears below.)
    Slides, Video: Part 1, Part 2, Part 3 + Discussion
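
    As background for the machinery this abstract surveys, here is a deliberately simplified, hypothetical Java sketch of one common ingredient of adaptive optimization, invocation-counter-based recompilation; the AdaptiveController class and its threshold are invented for illustration, and real systems use far richer mechanisms (sampling profilers, multiple optimization levels, cost-benefit models).

        import java.util.concurrent.ConcurrentHashMap;

        // Simplified, hypothetical sketch of counter-based adaptive recompilation:
        // an interpreter bumps a per-method counter and, once a threshold is reached,
        // hands the method to an optimizing compiler. Names are invented.
        final class AdaptiveController {
            private static final int HOT_THRESHOLD = 10_000;

            private final ConcurrentHashMap<String, Integer> invocationCounts = new ConcurrentHashMap<>();
            private final ConcurrentHashMap<String, Boolean> compiled = new ConcurrentHashMap<>();

            // Called by the interpreter on every method entry.
            void onMethodEntry(String methodId) {
                int count = invocationCounts.merge(methodId, 1, Integer::sum);
                if (count == HOT_THRESHOLD && compiled.putIfAbsent(methodId, Boolean.TRUE) == null) {
                    scheduleOptimizingCompile(methodId);
                }
            }

            // In a real VM this would enqueue work for a background compile thread
            // and later install the generated code; here it only records the decision.
            private void scheduleOptimizingCompile(String methodId) {
                System.out.println("recompiling hot method: " + methodId);
            }
        }

    A production VM would typically also decay or sample counters, compile on background threads, and support on-stack replacement of the currently running activation.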

  • Jeremy Hylton, Python implementation strategies
    My plan is to compare how Python is compiled for three VMs -- the Python VM, JVM, and .NET -- and discuss how the Python VM is quite different from the other two. I'll try to make some suggestions about Python's requirements for a future VEE.
    Slides, Video: Part 1, Part 2, Part 3 + Discussion


  • Jim Kleewein, On the Need for Data Management Primitives in a VEE
    At a very gross level, applications running in a VEE can be classified into two large categories: those that operate without access to any persistent state (for example, the classic payment amortization calculator), and those that require access to some persisted state. This persisted state is often, but not always, stored in a relational database system and accessed, processed, manipulated, and displayed using set-oriented operations. A 21st century VEE should include basic database access primitives as well as basic set-based processing primitives. This talk will explore some of the basic database and set-based operations that should be considered for a VEE. (A hypothetical sketch of such primitives appears below.)
    Slides, Video: Part 1, Part 2, Part 3 + Discussion
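
    To illustrate what set-based primitives might look like to managed code, here is a small, hypothetical Java sketch; the DataSet and DataStore interfaces below are invented for illustration and are not a proposal from the talk.

        import java.util.List;
        import java.util.function.Function;
        import java.util.function.Predicate;

        // Hypothetical sketch: set-oriented data-access primitives that a VEE might
        // expose directly to managed code, so that selection and projection over
        // persistent state can be executed set-at-a-time, close to the data.
        interface DataSet<T> {
            DataSet<T> filter(Predicate<T> predicate);          // relational selection
            <R> DataSet<R> project(Function<T, R> projection);  // relational projection
            List<T> materialize();                              // bring the result into the heap
        }

        interface DataStore {
            <T> DataSet<T> open(String name, Class<T> elementType);
        }

        final class OverdueReport {
            // Example use: select and project over persisted records without writing
            // explicit loops; the runtime is free to push this work down to the store.
            static List<String> overdueCustomers(DataStore store) {
                return store.open("invoices", Invoice.class)
                            .filter(inv -> inv.daysOverdue > 30)
                            .project(inv -> inv.customerName)
                            .materialize();
            }

            static final class Invoice {
                String customerName;
                int daysOverdue;
            }
        }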

  • Doug Lea, Concurrency: where to draw the lines
    Concurrency support requires a set of design choices about what functionality to provide and how to structure it at the chip, system, OS, VM, library, and application levels. This talk will discuss experiences with some of these decisions and trade-offs as Java has evolved to support more interesting and stable concurrency properties, and some guesses about future directions.
    Slides, Video: Part 1, Part 2, Part 3 + Discussion


  • Paolo Molaro, Mono Past and Future
    The motivations for the development of the Mono VM will be explained: why we chose the CLR as a foundation, and what our objectives are in the short and long run. The current state of the Mono VM will be presented, with some details of the JIT implementation and compatibility issues. Next we'll discuss how we plan to improve Mono in the next couple of years, what changes we anticipate will happen in the developer community regarding virtual machines, and how Mono will support those changes.
    Slides, Video: Part 1, Part 2, Part 3 + Initial Discussion, Further Discussion
    Due to recording complications, some of the discussion between Part 3 and Part 4 was not recorded.

  • Ian Piumarta, Late Binding and Dynamic Implementation
    Slides, Video: Part 1 (started late), Part 2, Part 3 + Discussion (video not available)

  • Armin Rigo, The PyPy approach to virtual machines
    I will first explain how PyPy works, how it can target various virtual machines and possibly help to bridge them. I will focus on how the PyPy approach differs from the approach of building and using standardized virtual machines, and why I prefer the former "language-oriented" solution over the latter machine-oriented solution.
    Slides, Video: Part 1, Part 2, Part 3 + Discussion

  • Christoph Rohland, Experiences in Using Virtual Machines for Standard Application Development
    The presentation will focus on the experiences and hurdles in developing an environment for standard application development on Java. I will draw comparisons to the ABAP environment, which is used for this task very successfully. Further, I will elaborate on the fact that the Java VEE is marketed as a virtual machine: I will compare the Java environment with the facilities provided by a physical machine and an operating system. Overall, my presentation will depict a user's point of view on VEEs.
    Slides, Video: Part 1, Part 2, Part 3 + Discussion

  • Mendel Rosenblum, Virtual Machine Monitors - The Original Virtual Execution Environment
    Virtual Execution Environments (VEEs) are examples of the basic level-of-indirection trick of computer scientists. By adding a level of indirection you can solve any problem with some, hopefully modest, performance overhead. In this talk I will argue that such a level of indirection is needed between the hardware and the software in a modern computing environment. This layer, traditionally called a virtual machine monitor, is needed even as functionality moves to run in higher-level VEEs (managed code runtimes).
    Slides, Video: Part 1, Part 2, Part 3 + Discussion, Further Discussion

  • Vivek Sarkar, Language and Virtual Machine Challenges for Large-Scale Parallel Systems
    The benefits of virtual machine technologies and managed runtime environments are well recognized in areas such as safety, portability, interoperability, and virtualization. However, the areas of large-scale parallel systems and high-performance computing offer interesting new opportunities for exploiting VM and MRE technologies, especially in conjunction with new languages. In this talk, we will outline the design decisions being made in the X10 language for large-scale parallel computing, the benefits that we expect to obtain by building a managed runtime environment for X10, and the challenges in VM and MRE technologies that will need to be addressed to effectively support X10 on future parallel computer systems with hierarchical heterogeneous levels of parallelism (e.g., cluster, SMP, multiple cores on a chip, non-coherent co-processors, SMT, vector, etc.) and large nonuniformities in data-access latency and bandwidth. The X10 effort is part of the IBM PERCS (Productive Easy-to-use Reliable Computing Systems) project, which is partially supported by the DARPA program on High Productivity Computing Systems (HPCS). X10 has been designed with input from David Bacon, Bob Blainey, Perry Cheng, Julian Dolby, Kemal Ebcioglu, Guang Gao (U Delaware), Allan Kielstra, Robert O'Callahan, Filip Pizlo (Purdue), V.T.Rajan, Lawrence Rauchwerger (Texas A&M), Vijay Saraswat, Mandana Vaziri, and Jan Vitek (Purdue).
    Slides, Video: Part 1, Part 2, Part 3 + Discussion


  • Suresh Srinivas, Modularity, Hardware-based Profiling and Mixed ISA Execution within Managed Runtimes
    Intel Labs and Intel product groups are working on a variety of managed runtime areas on current and future Intel Architectures. This includes managed runtimes and micro-architectures, influencing and enabling ISV managed runtimes, and modularity and optimized libraries within managed runtimes. In this talk, we will cover modular runtimes and dynamic compilers, hardware-based dynamic profile-guided optimization, and mixing multiple ISAs within the same managed runtime. We will offer some of our perspectives on where we think managed runtimes are headed.
    Slides, Video: Part 1, Part 2, Part 3 + Discussion

  • Dan Sugalski, Handing Out New Toys
    VMs are a good way to make interesting, but otherwise overlooked or difficult to implement, theoretical constructs available to language designers.
    Slides, Video: Part 1, Part 2, Part 3 + Discussion, Further Discussion


  • Kari Systa, Requirements and Issues of VXEs for Mobile Terminals
    This talk will touch on the following issues: user interface, including scalability; resource management; operational management of the platform; hardware abstraction of various accelerators; and robustness, reliability, and security (vs. flexibility).
    Slides, Video: Part 1 (started late), Part 2, Part 3, Part 4 + Discussion


  • Bob Vandette, Virtual Machines: Past and Future
    All computer system vendors know that one of the key success factors for the marketability of their products is a large and up-to-date collection of quality, high-performing applications. Bob Vandette, a senior developer with Sun Microsystems, has worked in the field of virtual machines for the past 20 years, developing products which use this technology to enhance system vendors' portfolios of applications. Bob will discuss the evolution of these products, examining their past, present and future implementations and the changing challenges that the virtual machine developer has had to overcome through the years.
    Slides, Video: Part 1, Part 2, Part 3 + Discussion (video not available)

  • Ben Zorn, Thoughts on the Future of Runtime Systems
    Managed Runtime Environments (MREs) (aka virtual execution environments or simply runtime systems) have evolved in functionality and complexity for over 40 years. MREs, such as the JVM and CLI, have absorbed functionality once only available from the operating system, and at the same time MREs support diverse and highly dynamic application configurations. While current MREs are clearly effective for many applications, opportunities remain to improve their design and broaden their applicability. In my talk, I will focus on current issues with runtime systems and consider where hardware and software trends are likely to take these systems in the future. I consider MREs from the perspectives of performance, reliability, and ease of use, drawing on published experiences using the CLI for client applications. I will also suggest important design directions for future MREs, including thoughts on improving support for modularity, error handling, concurrency, and componentization. One of the important future challenges for MREs is to demonstrate that they are up to the task of implementing the lowest-level system software, a domain where they are needed. The Singularity Project, at Microsoft Research, is investigating this challenging problem.
    Slides, Video: Part 1, Part 2, Part 3, Part 4 + Discussion

  • Part 1, Part 2, Part 3

  • Workshop summary and discussion
    Slides, Video: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6


For feedback or to report any errors on this page, please send mail to Michael Hind.

 



