Shuji Hashimoto (Professor Emeritus and Research Advisor, Waseda University)

Shuji Hashimoto received the B.S., M.S., and Dr.Eng. degrees in Applied Physics from Waseda University, Tokyo, Japan, in 1970, 1973, and 1977, respectively. He became an Associate Professor in the Department of Physics, Toho University, in 1979. In 1991 he moved to Waseda University as a Professor in the Department of Applied Physics. At Waseda University he served as the Director of the Humanoid Robotics Institute for ten years from 2000. From 2006 to 2010 he was the Dean of the Faculty of Science and Engineering. He was appointed and served as the Senior Executive Vice President for Academic Affairs and Provost of the university from 2010 to 2018. He has been one of the leaders of the Gundam Global Challenge since 2014. Currently he is a Professor Emeritus and Research Advisor of Waseda University. He joined XELA Robotics as the CEO in April 2019. His research interests include Artificial Intelligence, Robotics, “KANSEI” Information Processing, Sound and Image Processing, and Meta-Algorithms.

Lecture: Music in the AI era

    Many times we have been told that “at last, real artificial intelligence has arrived,” and each time we were let down with various excuses. However, looking at the recent progress in AI technology, it seems that this time it might be true. Powerful networked computers armed with big data seem to produce adequate solutions to complicated problems that could never be solved before.
    Science and engineering have been inseparable partners in forming technology. Science organizes discovered knowledge and constructs theories to understand the world, while engineering provides means and methods that put those theories into practical use, delivering solutions to real-world problems. But today’s deep-learning-based AI produces solutions directly from a huge accumulation of raw data. Science seems to have been blown out of the traditional picture of technology; what remains is engineering alone, delivering solutions. At present, AI works well in most cases, though not all. However, it does not tell us why an answer is correct. As many people complain, there is no proof of validity. The output of AI often sounds like a divine revelation: a black box whose inside we never see. All we can do is believe in AI, saying, “because the computer knows all.” With the recent rise of AI, traditional, conscientious researchers, who accumulate appropriate processes based on theory and knowledge to approach a solution, seem to have been exiled from the main stage in many fields, including music technology and science.
  Science seems to be at stake in this way, but I am not pessimistic about the current situation. We need science to understand things. We need engineering to make things. Science hates black boxes, while engineering often accepts a black box if it is useful. Useful tools accelerate science. AI is not yet in its final stage, and neither is human intelligence. I believe we need to start a new story of science together with AI as a new tool. Music is a fascinating field for elucidating human intelligence and creativity, as it encompasses philosophy and the arts as well as science and engineering. I would like to tell my story of Music in the AI Era.

Tadahiro Taniguchi (Professor, College of Information Science and Engineering, Ritsumeikan University)

Tadahiro Taniguchi received the M.E. and Ph.D. degrees from Kyoto University in 2003 and 2006, respectively. From April 2005 to March 2006, he was a Japan Society for the Promotion of Science (JSPS) Research Fellow (DC2) at the Department of Mechanical Engineering and Science, Graduate School of Engineering, Kyoto University. From April 2006 to March 2007, he was a JSPS Research Fellow (PD) at the same department. From April 2007 to March 2008, he was a JSPS Research Fellow at the Department of Systems Science, Graduate School of Informatics, Kyoto University. From April 2008 to March 2010, he was an Assistant Professor at the Department of Human and Computer Intelligence, Ritsumeikan University. From April 2010 to March 2017, he was an Associate Professor at the same department. From September 2015 to September 2016, he was a Visiting Associate Professor at the Department of Electrical and Electronic Engineering, Imperial College London. Since April 2017, he has been a Professor at the Department of Information Science and Engineering, Ritsumeikan University, and has also served as a Visiting General Chief Scientist in the Technology Division of Panasonic. He has been engaged in machine learning, emergent systems, intelligent vehicles, and symbol emergence in robotics.

Lecture: Generative Models for Symbol Emergence based on Real-World Sensory-motor Information and Communication
Music and language have structural similarities. Such structural similarity is often explained via generative processes. This invited lecture introduces recent developments in probabilistic generative models (PGMs) for language learning and symbol emergence in robotics. Symbol emergence in robotics aims to develop a robot that can adapt to the real-world environment and human linguistic communication, and acquire language from sensorimotor information alone (i.e., in an unsupervised manner). To this end, a series of PGMs has been developed, including models for simultaneous phoneme and word discovery, lexical acquisition, object and spatial concept formation, and the emergence of a symbol system. This lecture also introduces challenges related to integrating probabilistic generative models, and the possible intersection between symbol emergence in robotics and computational music studies.
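To give a flavor of the idea that sequential structure can be explained via a generative process, here is a minimal toy sketch of a probabilistic generative model: a tiny hidden Markov model whose latent states emit observable symbols. The state names, symbols, and probabilities are illustrative assumptions for this sketch only, not the models developed in the lecture.

```python
import random

# Toy probabilistic generative model: latent states ("phrase", "cadence")
# transition among themselves and emit observable symbols, loosely analogous
# to how both language and music can be viewed as sequences generated from
# hidden structure. All names and probabilities are illustrative assumptions.

TRANSITIONS = {
    "phrase":  {"phrase": 0.7, "cadence": 0.3},
    "cadence": {"phrase": 0.9, "cadence": 0.1},
}
EMISSIONS = {
    "phrase":  {"A": 0.5, "B": 0.5},
    "cadence": {"C": 1.0},
}

def sample_categorical(dist, rng):
    """Draw one outcome from a {outcome: probability} dict."""
    r = rng.random()
    cumulative = 0.0
    for outcome, p in dist.items():
        cumulative += p
        if r < cumulative:
            return outcome
    return outcome  # guard against floating-point round-off

def generate(length, rng, state="phrase"):
    """Run the generative process: emit a symbol, then sample the next state."""
    symbols = []
    for _ in range(length):
        symbols.append(sample_categorical(EMISSIONS[state], rng))
        state = sample_categorical(TRANSITIONS[state], rng)
    return symbols

if __name__ == "__main__":
    rng = random.Random(0)
    print("".join(generate(20, rng)))
```

Inverting such a process — inferring the latent states and parameters from observed sequences alone — is the unsupervised-learning problem that PGM-based approaches address at far larger scale.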

Gaëtan Hadjeres (Sony CSL Paris Music Team)

Gaëtan Hadjeres graduated from the École Polytechnique (France) and obtained a master’s degree in Pure Mathematics from Paris 6 University (Sorbonne Universités). He joined Sony CSL Paris in 2014 to pursue a Ph.D. on music generation under the supervision of François Pachet and Frank Nielsen. In 2018, Gaëtan successfully defended his dissertation, entitled “Interactive Deep Generative Models for Symbolic Music,” and is now a permanent member of the Sony CSL Paris Music Team. In parallel with his scientific background, he studied music composition at the Conservatoire de Paris (CNSMDP), and he is also a pianist and a double bass player. His works (DeepBach, the Piano Inpainting Application) focus on the creation of A.I. tools that assist musicians during composition, enrich their creative process, and make music composition playful and accessible to a wide audience.

Lecture: Developing Artist-centric Technology
Important progress in generative modeling has been made over the last few years, allowing researchers to envision novel creative usages with impressive results. However, such A.I. algorithms are often not easily accessible to or controllable by artists, so their widespread adoption by content creators is yet to come. In this talk, I will present various examples of our modular approach at Sony CSL to bridging the gap between researchers and artists through the development of A.I. assistants. Setting the interaction with an artist as our core requirement brings up new and interesting challenges, and we hope this approach will help democratize the latest advances in A.I. amongst musicians.