This project explores sound synthesis and control within a modular interface, aiming to build generative music systems that offer flexibility, creativity, and efficient parameter control. The study focuses on two main sound sources: multichannel-based additive synthesis and granular synthesis. Additive synthesis, built from a sum of sine waves, provides a rich and fully specified sound source, while granular synthesis divides complex waveforms into small grains for sonic manipulation.

The modular interface employs Max's optimized multichannel (mc) objects, enabling efficient additive processing and the generation of harmonic series. For the granular engine, a buffer~-based approach divides waveforms into grains and applies a Gaussian envelope to each grain, ensuring smooth transitions between grains and preventing clipping.

A hybrid control mechanism allows both collective and individual parameter control. Sets of parameter values are stored as "modes," representing the simultaneous movement of all parameters. To move between modes, the system interpolates between them: the MLPRegressor~ object from the Fluid Corpus Manipulation Project (FluCoMa) serves as a neural-network-based tool that predicts parameter values from assigned coordinates. Parameters are mapped and controlled through a pitchslider and 24 parameter values; pinning specific modes to coordinates enables optimized interpolation between data sets and therefore smooth transitions between modes.

Overall, the project contributes a hybrid approach that combines additive and granular synthesis techniques within a modular interface. The integration of multichannel objects, buffer~-based granulation, and neural-network interpolation enhances the creative potential and efficiency of the system, providing a flexible and intuitive platform for generative music production and empowering musicians and artists to create dynamic and immersive sonic atmospheres.
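As a minimal illustration of the additive idea described above (not the Max patch itself), the following Python sketch sums harmonically related sine partials into one tone, roughly what the mc-based additive engine does in signal form. The 1/n amplitude roll-off and the partial count are assumptions chosen for the example.

```python
import numpy as np

SR = 44100          # sample rate (Hz)
DUR = 1.0           # duration in seconds
N_PARTIALS = 16     # number of harmonics summed by the "multichannel" bank

def additive_tone(f0, n_partials=N_PARTIALS, sr=SR, dur=DUR):
    """Sum sine partials at integer multiples of f0 (assumed 1/n amplitude roll-off)."""
    t = np.arange(int(sr * dur)) / sr
    out = np.zeros_like(t)
    for n in range(1, n_partials + 1):
        out += np.sin(2 * np.pi * f0 * n * t) / n   # n-th harmonic of the series
    return out / np.max(np.abs(out))                # normalize to avoid clipping

tone = additive_tone(110.0)   # e.g. A2 with 16 harmonics
```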
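The buffer~-based granulation can be sketched in the same spirit: slice a source waveform into short grains, shape each grain with a Gaussian envelope, and overlap-add the results so grain boundaries never click. Grain size, hop size, and the Gaussian width below are illustrative values, not settings taken from the patch.

```python
import numpy as np

def gaussian_env(length, sigma=0.25):
    """Gaussian window centred on the grain; sigma is relative to grain length (assumed)."""
    n = np.arange(length)
    centre = (length - 1) / 2
    return np.exp(-0.5 * ((n - centre) / (sigma * length)) ** 2)

def granulate(source, grain_size=2048, hop=512):
    """Read successive grains from `source`, envelope them, and overlap-add."""
    env = gaussian_env(grain_size)
    out = np.zeros(len(source) + grain_size)
    for start in range(0, len(source) - grain_size, hop):
        grain = source[start:start + grain_size] * env   # enveloped grain
        out[start:start + grain_size] += grain           # overlap-add
    return out / np.max(np.abs(out))                     # normalize the result

# Example: granulate one second of noise (stands in for any recorded waveform).
noise = np.random.default_rng(0).standard_normal(44100)
grains = granulate(noise)
```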
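Finally, the mode-interpolation concept can be approximated outside Max with scikit-learn's MLPRegressor standing in for FluCoMa's fluid.mlpregressor~: each stored mode pairs a 2-D coordinate with a full set of 24 parameter values, and the trained network predicts intermediate parameter sets for coordinates between the stored modes. The corner coordinates, random parameter vectors, and network size here are assumptions for the sketch, not data from the project.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training data: four modes pinned to the corners of a unit square,
# each associated with 24 parameter values in the range 0..1.
rng = np.random.default_rng(0)
coords = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
modes = rng.random((4, 24))

model = MLPRegressor(hidden_layer_sizes=(16, 16), solver="lbfgs",
                     max_iter=2000, random_state=0)
model.fit(coords, modes)   # learn coordinate -> 24-parameter mapping

# Moving a 2-D controller to a point between the stored modes yields an
# interpolated set of all 24 parameters at once.
interpolated = model.predict([[0.5, 0.5]])
print(interpolated.shape)   # (1, 24)
```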
You can discover the atmosphere this Max/MSP patch creates. It works with several sound sources, modulators, and machine-learning tools developed by the FluCoMa team. Detailed information about the sounds and the concept behind the project can be found in the paper added below. Feel free to download it and explore the idea. Please remember to cite this work if you use it in your own papers. Thanks.