Call for Papers: IEEE International Workshop on Delay-Sensitive Video Computing in the Cloud
In conjunction with IEEE CloudCom 2015
November 30 - December 3, 2015, Vancouver, Canada

Video applications are now among the most widely used on the Internet and a daily fact of life for the great majority of Internet users. In 2013, video data accounted for 78% of all Internet traffic in the USA and 66% of all Internet traffic worldwide, shares projected to grow to 84% and 79%, respectively, by 2018. While presentational video services such as those provided by YouTube and Netflix dominate this video traffic, conversational video services such as video conferencing, video gaming, telepresence, tele-learning, collaborative shared environments, and screencasting also see significant use.

At the same time, with the advent of mobile networks and cloud computing, we are seeing a paradigm shift: the computationally intensive components of these conversational video services are moving to the cloud, while the end user's mobile device serves as an interface to the service. This way, even mobile devices without high-end graphical and computational capabilities can access a high-fidelity application with high-end graphics, because all processing and rendering is done in the cloud and the result is delivered to the user as video, which any mobile device today can display. A practical example is cloud gaming, where game events are processed and the game scene is rendered in the cloud, with the resulting video streamed to the players.

What distinguishes these cloud-based conversational video systems from other video systems is that they are highly delay sensitive. While users tolerate buffering and interruptions of even a few seconds in presentational video applications, conversational video applications require a much tighter end-to-end delay, usually in the range of 150 to 250 milliseconds.
Beyond that threshold, a conversational application effectively "fails", since it no longer responds to user interactions fast enough.

There have been many recent proposals for cloud-based video encoding, with the great majority focusing on video-on-demand applications and mostly using the well-known Hadoop and MapReduce technologies: the video is broken into multiple chunks, each chunk is encoded or transcoded on a worker node, and the video as a whole is thus encoded in parallel and therefore faster. But this approach does not work in a "live" video scenario, since there is not enough time for such operations: the video must be processed live as it arrives and delivered to the user without violating delay thresholds. The fact that the cloud acts as a central node adds a potential bottleneck and possibly further delays. Delay-sensitive processing and rendering of video in the cloud has therefore become an emerging area of interest.

Topics of Interest:
When running conversational video applications in the cloud, the cloud processes not only the application logic but also the video rendering, and the resulting scene is then streamed to clients as video. This raises several challenges. First, video requires high bandwidth, especially if the scene must be sent to multiple users, as in video conferencing, cloud gaming, and telepresence; cloud gaming, for example, requires a connection with no less than 5 Mbps of constant bandwidth per player to provide interactive gaming services at a resolution of 720p and 30 fps. Second, conversational video is sensitive to network latencies that impair the interactive experience of the application. Third, the mobility of today's users poses another set of challenges: due to the heterogeneity of end users' devices, the cloud has to adapt the video content to the characteristics and limitations of the client's underlying network or end device.
These include limitations in the available network bandwidth, in the client device's processing power, memory, display size, and battery life, and in the user's download cap under his/her mobile subscription plan. While rapid progress in mobile hardware technologies is making some of these restrictions less problematic, battery life in particular, and download caps to some extent, remain problems that must be taken seriously. Moreover, consuming more bandwidth or computational power, even when available, means consuming more battery.

In this workshop, we seek original papers that propose new approaches, methods, systems, and solutions that overcome the above shortcomings. Specifically, we seek papers on the following and similar topics:
Methods to speed up video encoding and video streaming at the cloud side
Live and real-time parallel video coding in the cloud
Methods to decrease video bandwidth while maintaining visual quality
Energy-efficient cloud computing for video rendering at the server side
Efficient capturing, processing, and streaming of user interactions to the cloud, such as traditional, Kinect-like, Wii-like, gesture, touch, and similar mobile and touch-based user interactions
Virtualization of large-volume user inputs (e.g., depth sensor video) in the cloud
Remote desktop, screen sharing, and Game as a Service (GaaS)
Video-based telepresence, collaborative shared environments, and cloud gaming
Optimizing cloud infrastructure and server distribution to efficiently support globally distributed and interacting users
Resource allocation and load balancing in the cloud for optimized application support
Network routing, software-defined networking (SDN), virtualization, and on-demand dynamic control of the cloud infrastructure
Adaptive video streaming according to network/user limitations
Quality of Experience (QoE) studies and improvements for delay-sensitive video computing in the cloud: user-cloud and user-user interactions, effects of delay and visual quality limitations, and methods to improve them
Novel architectures and designs based on cloud video rendering for video conferencing, video gaming, telepresence, tele-learning, collaborative shared environments, screencasting, and other conversational video applications and systems
Submissions
Paper submissions should be at most 6 pages long and must cover one of the above or similar topics. We especially encourage experience papers describing lessons learned from built systems, including working approaches, unexpected results, common abstractions, and metrics for evaluating and improving video systems. Please see the IEEE CloudCom guidelines for further formatting and submission instructions.

Important Dates
Note: dates will be synchronized with the IEEE CloudCom 2015 workshops.
Submission Deadline: August 1, 2015
Notification of Decision: September 1, 2015
Camera Ready: September 15, 2015
Workshop Date: November 30 or December 3, 2015 (TBD)

Program Chairs
Shervin Shirmohammadi ([email protected]), University of Ottawa, Ottawa, Canada
Maha Abdallah ([email protected]), Pierre & Marie Curie University (UPMC), Paris, France
Dewan Tanvir Ahmed ([email protected]), University of North Carolina at Charlotte, Charlotte, USA
Kuan-Ta Chen ([email protected]), Academia Sinica, Taiwan