KEYNOTE SPEAKERS
Industry/University Collaboration in Creating Advanced Multimedia Systems and Tools
Prof. Borko Furht, PhD
Florida Atlantic University
USA
Cloud-Based Media Processing for Improved Broadcasting Applications
Prof. Ebroul Izquierdo, PhD
Queen Mary, University of London
UK
Industry/University Collaboration in Creating Advanced Multimedia Systems and Tools
Prof. Borko Furht, PhD
Florida Atlantic University
USA
Abstract:
In this talk we will first introduce the NSF-sponsored Industry/University Cooperative Research Center for Advanced Knowledge Enablement at FAU, which presently has 35 industry members contributing about $5 million in memberships. The Center is successfully building a bridge linking academia, industry, and government in a coordinated research initiative. We describe several applied multimedia research projects conducted within the Center, including video and image mining for coastline security, 3D image reconstruction and segmentation of brain cells, augmented reality methods for hearing augmentation, automatic asset management in data centers, a driver drowsiness detection system using image processing, and several others. All these projects were initiated by industry partners who are members of the Center and who are interested in applying the research results to create successful commercial products. The talk will conclude with our prediction of where multimedia computing is heading in the next several years.
About the Keynote Speaker:
Borko Furht is a professor in the Department of Computer & Electrical Engineering and Computer Science at Florida Atlantic University (FAU) in Boca Raton, Florida. He is also Director of the NSF-sponsored Industry/University Cooperative Research Center (I/UCRC) for Advanced Knowledge Enablement at FAU. Before joining FAU, he was a vice president of research and a senior director of development at Modcomp (Ft. Lauderdale), a computer company of Daimler Benz, Germany; a professor at the University of Miami in Coral Gables, Florida; and a senior researcher at the Institute Boris Kidric-Vinca, Yugoslavia. Professor Furht received his Ph.D. in electrical and computer engineering from the University of Belgrade. His current research is in multimedia systems, video coding and compression, 3D video and image systems, wireless multimedia, and Internet and cloud computing. He is presently Principal Investigator or Co-PI of several multiyear, multimillion-dollar projects, including the NSF Fundamental Research program and the NSF High-Performance Computing Center. He is the author of numerous books and articles in the areas of multimedia, computer architecture, real-time computing, and operating systems. He is also the editor of two encyclopedias: the Encyclopedia of Wireless and Mobile Communications (CRC Press, 2007; 2nd edition, 2012) and the Encyclopedia of Multimedia (Springer, 2009). He is a Special Advisor for Technology and Innovations for the United Nations' Global Millennium Development Foundation.
He is the founder and editor-in-chief of the Journal of Multimedia Tools and Applications (Springer) and, more recently, of Springer's Journal of Big Data. He has received several technical and publishing awards, and has consulted for many high-tech companies, including IBM, Hewlett-Packard, Xerox, General Electric, JPL, NASA, Honeywell, and RCA. He has also served as a consultant to various colleges and universities, has given many invited talks, keynote lectures, seminars, and tutorials, and has served on the Boards of Directors of several high-tech companies.
Cloud-Based Media Processing for Improved Broadcasting Applications
Prof. Ebroul Izquierdo
Head of the Multimedia and Vision Group in the School of Electronic Engineering and Computer Science
Queen Mary, University of London
UK
Abstract:
Conventional broadcasting has been limited by the need for expensive proprietary facilities located at a single site. Furthermore, interactivity is still embryonic or non-existent in even the most advanced broadcasting applications. Typically, media processing for the production of appealing TV services involves the management and transmission of terabytes of data, and local networks are created to ingest, store, and manipulate such data. This situation is restrictive and places high barriers to the creation of new media services.
Gracefully integrating available cloud infrastructures, together with processing capabilities at the network edge, into the production value chain faces several key challenges that must be overcome before it can be adopted and diffused into value-added processes and accelerate creativity in the broadcasting sector. This is particularly true for applications and services involving interactive, synchronised, multi-modal streaming and near real-time processing, which hold the most demanding set of performance requirements. The aim is to offer full support for interactivity, exploiting both professionally captured content and user-generated content. Here, the convergence of ultra-high-definition technology and smart, social, user-generated content is giving rise to a new generation of applications and services with interactivity at their heart. Furthermore, enabling media processing at the edge of the network will lead to new business models, turning interactive TV and media manipulation into a much faster-growing industry. The foreseen opportunities range from high-end studio production to individuals creating programme content from licensed data.
In this talk, selected techniques and approaches for media processing based on the symbiosis between remote computing resources (possibly in the cloud) and distributed low-computation devices at the network edge will be presented and discussed. More specifically, seamless transmission optimised for the bandwidth available in heterogeneous networks will be presented; this has been the target of cutting-edge standards such as AVC and HEVC. Moving forward, in Fog Media Coding (FMC), a single content element in a distributed network consists of scalably coded audiovisual content enriched with metadata, together with suitable intelligent decoders that cooperate to deliver high-quality content and interaction capabilities. This emerging paradigm for media coding and decoding will also be presented in the context of new broadcasting services. Interactivity and near real-time processing involving convergent ultra-high-definition technology and mobile user-generated content will be discussed as well.
About the Keynote Speaker:
Ebroul Izquierdo, PhD, MSc, CEng, FIET, SMIEEE, MBMVA, is Chair of Multimedia and Computer Vision and head of the Multimedia and Vision Group in the School of Electronic Engineering and Computer Science at Queen Mary, University of London. He has been a senior researcher at the Heinrich Hertz Institute for Communication Technology (HHI), Berlin, Germany, and at the Department of Electronic Systems Engineering of the University of Essex.
Prof. Izquierdo is a Chartered Engineer, a Fellow of The Institution of Engineering and Technology (IET), a senior member of the IEEE, and a member of the British Machine Vision Association. He is a past chairman of the IET professional network on Information Engineering. He is a member of the Visual Signal Processing and Communications Technical Committee of the IEEE Circuits and Systems Society and of the Multimedia Signal Processing Technical Committee of the IEEE.
Prof. Izquierdo is or has been an associate editor of the IEEE Transactions on Circuits and Systems for Video Technology (2002 to 2010) and the IEEE Transactions on Multimedia (2010 to date). He is or has been a member of the editorial boards of the EURASIP Journal on Image and Video Processing (2004 to date), the Journal of Multimedia Tools and Applications (2008 to 2014), the Journal of Multimedia (2009 to 2014), the Journal of Computer Engineering International (2008 to date), and the Infocommunications Journal (2008 to 2015). He has been a guest editor of the Elsevier journal Signal Processing: Image Communication, the EURASIP Journal on Applied Signal Processing, and the IEE Proceedings on Vision, Image & Signal Processing.
Prof. Izquierdo has been a member of the organizing committees of several conferences and workshops in the field of image and video processing, including the IEEE International Conference on Image Processing (ICIP), the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), the IEEE International Symposium on Circuits and Systems (ISCAS), the IEEE Visual Communications and Image Processing Conference (VCIP), and the IEEE International Conference on Multimedia & Expo (ICME). He has chaired special sessions and workshops at ICIP, ICASSP, ISCAS, VCIP, and ICME. He has been the general chair of the European Workshop on Image Analysis for Multimedia Interactive Services (London 2003 and Seoul 2006), the European Workshop for the Integration of Knowledge, Semantics and Content (London 2004 and 2005), the Mobile Multimedia Communications Conference MobiMedia (Alghero 2006), the International Conference on Content Based Multimedia Indexing (London 2008), the IET Conference on Visual Information Engineering (Xi'an 2008), and the International Conference on Imaging for Crime Detection and Prevention (London 2015).
Prof. Izquierdo has been involved as principal investigator in many EU-funded projects, including RACE Panorama, Cost211, Cost292, FP5 SCHEMA, FP5 Sambits, FP6 MESH, FP6 Papyrus, FP6 RUSHES, FP7 PetaMedia, FP7 Sala+, FP7 SARACEN, FP7 NextMedia, FP7 Eternal, FP7 VideoSense, FP7 Advise, FP7 Cubrik, and FP7 Ecopix. He has coordinated several other large cooperative projects, including FP6 aceMedia, the Cost292 Action, FP5 BUSMAN, FP6 K-Space, FP7 3DLife, FP7 EMC2, and FP7 Reverie. He has also been involved in several UK-funded projects, including EPSRC VideoAnnotation, TSB Thira, EPSRC Prometheus, and EPSRC VideoCoding.
Prof. Izquierdo has graduated over 30 PhD researchers. He holds several patents in the area of multimedia signal processing and has published over 500 technical papers, including books and book chapters.