Max is a powerful platform that accommodates and connects a wide variety of tools for sound, graphics, music and interactivity using a flexible patching and programming environment. Max allows most computer users to write a simple meaningful program within a few minutes, even with limited programming knowledge. But to do something more substantial it's necessary to approach Max as an actual programming language, by taking advantage of its various mechanisms for abstracting program elements into scalable, reusable components that can be combined in increasingly powerful ways.
This class will not cover every single capability of the language, but instead will focus on key concepts and mechanisms that will allow for tremendous new freedom and possibilities in Max. The class will touch upon:
• sound and movie playback
• sound synthesis
• sound and video effects processing
• algorithmic composition
• cross-modal mappings (e.g., video affecting audio and vice versa)
• interactive control (e.g., from QWERTY keyboard, mouse, USB devices, Open Sound Control)
Max programming, like most interesting topics, has deep aspects and shallow aspects. This course will largely focus on the deep aspects: principles, concepts, techniques, and theory. If you understand these underlying aspects, your capacity to create in Max will deepen exponentially.
At the same time, this is not just a theory class. You will also create your own projects using Max. This course will teach the minimum you need to start working on assignments, but mostly I will teach you how to learn or look up the shallow knowledge on your own using Max's built-in documentation, the Internet, and the Kadenze course forum, as well as how to program your own tests that answer specific questions or reveal potential bugs. Working in this way, you will also build essential skills and habits that foster confidence and self-sufficiency, and that will serve you well in the future.
Dr. Matthew Wright is a media systems designer, improvising composer/musician, and computer music researcher. He was the Musical Systems Designer at UC Berkeley's Center for New Music and Audio Technologies (CNMAT) from 1993 to 2008, and is known for his promotion of the Sound Description Interchange Format (SDIF) and Open Sound Control (OSC) standards, as well as his work on real-time mapping of musical gestures to sound synthesis. His dissertation at Stanford's Center for Computer Research in Music and Acoustics (CCRMA) concerned computer modeling of the perception of musical rhythm: "The Shape of an Instant: Measuring and Modeling Perceptual Attack Time with Probability Density Functions." He spent a year as a visiting research fellow at the University of Victoria working on "Computational Ethnomusicology," developing tools for the analysis and visualization of detailed pitch and timing information from musical recordings. For eight years he was the Research Director of UC Santa Barbara's Center for Research in Electronic Arts and Technology (CREATE), where he taught classes, advised students, and founded and directed the CREATE Ensemble, dedicated to research and musical creation with technology in a live performance context. He was also Principal Development Engineer for the AlloSphere, a three-story full-surround immersive audiovisual instrument for scientific and artistic research. As a musician, he plays a variety of traditional plucked lutes, Afro-Brazilian percussion, and computer-based instruments of his own design, in both traditional music contexts and experimental new works.
David Zicarelli is the founder and CEO of Cycling '74, the software company that maintains and develops the Max graphical programming environment. The company introduced Max extensions for audio (MSP) in 1997 and video (Jitter) in 2001. Before starting Cycling '74, Zicarelli worked on Max and other interactive music software at Opcode Systems, Intelligent Music, and IRCAM, and earned a doctorate from the Stanford Program in Hearing and Speech Sciences.