Thursday 27 October 2011

Typewriting Music





This is an attempt to explain what I find so fascinating about music trackers.

Trackers are widely known to make music look as exciting as an Excel spreadsheet. Behind that mask they offer a unique workflow and creation process for digital audio. But this process is hard to describe to someone who has never gathered any experience with trackers.

The tracker workflow sits somewhere between two different approaches. Most people know the direct approach: you hit record and play your instrument. You do this by leveraging your muscle memory, trained by practice. The advantage is the possibility of spontaneous improvisation and expression. A disadvantage is the danger of repetition, because you always start out playing what you have learned.
The other extreme is composing on paper, or using your mouse in a piano roll or score editor. The advantage is that you consciously think about what you create, or want to create.
Trackers are in between, because most of them can be controlled completely from the computer keyboard. This way you can use muscle memory to move through your project and create events like notes. Most users don't do this in real time, but more like a composer, as described in the second example. This enables a reflective workflow without thinking too much about the individual steps needed to get where you want to go. Tracker users describe this as being "in the flow".
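
To make this concrete, here is a minimal sketch (in Python) of what a tracker pattern essentially is: a grid of rows and channels that you fill with note events, one keystroke at a time. All names here are invented for illustration; real trackers like Renoise have their own formats.

    # Illustrative sketch of a tracker pattern: a grid of rows x channels.
    EMPTY = "---"

    def new_pattern(rows=16, channels=4):
        # Each cell holds a note event or stays empty.
        return [[EMPTY for _ in range(channels)] for _ in range(rows)]

    def put_note(pattern, row, channel, note):
        # This is roughly what a keystroke does: write a note at the cursor.
        pattern[row][channel] = note

    pattern = new_pattern()
    put_note(pattern, 0, 0, "C-4")    # a note on the first row, first channel
    put_note(pattern, 4, 0, "C-4")
    put_note(pattern, 0, 1, "E-4")    # a second voice on another channel
    for i, row in enumerate(pattern[:8]):
        print(f"{i:02X} | " + " | ".join(row))   # trackers number rows in hex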

I always dream of a DAW with the usual interface stuff we have today AND a tracker-style interface... fortunately, though, there are ReWire and the IAC bus to connect trackers like Renoise to other DAW software :)

Tuesday 11 October 2011

THE GAME BOY SOUND - Republished







This article was originally published in one of my older blogs, which no longer exists.

THE GAME BOY SOUND

OK, I have to admit that I have let myself in for a bigger project than I thought. I wanted to write review-like articles about my favorite Game Boy sound programs, but faced the problem that it does not make sense to write those without explaining how the Game Boy actually makes its sound. So I stopped writing the review and will give you an overview of the basics here. The plan is to release a Game Boy article every month, to leave room for the iPhone, Pocket PC and DS stuff. So stay tuned, and here we go back in time!


Chapter 01: The Game Boy Sound Chip, or what makes Chiptune Chiptune.


Unlike modern computers, which create sound mostly in software, older computers created sound with dedicated chips. These chips can be seen as more or less sophisticated synthesizers packed into an integrated circuit (IC). The best-known one-chip synthesizer is surely the SID chip that came with the Commodore 64 and is widespread in pop music today thanks to the likes of Timbaland, Zombie Nation, Welle:Erdball and many others. Just like the C64, the original Game Boy features a sound chip as well. But hey, the Game Boy would never have been that affordable if Nintendo had not integrated all parts as much as possible. The sound chip is therefore part of the main processor chip, and you will never find it on its own if you open up a Game Boy. The sound part of the Game Boy CPU is sometimes called the PAPU (Pseudo Audio Processing Unit) and is very limited in its possibilities compared to the SID. But that doesn't make it less interesting today, because limitation spurs the creativity of musicians as well as programmers.

Take it apart:

Even though everybody refers to the Game Boy sound as 8-bit sound, it is actually 4-bit sound. The misunderstanding comes from the fact that the main architecture runs at 8 bits, while the sound section's digital-to-analogue converter (DAC), which turns the digital sound into electrical signals, runs at 4 bits. This is partially responsible for the raw character of the sound. So if you want to reproduce this, you know you have to set your bitcrusher to 4 bit ;)
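
As a hedged illustration of what 4 bits means in practice (a Python sketch, not actual Game Boy code): every sample gets snapped to one of only 2^4 = 16 levels, which is exactly what a bitcrusher set to 4 bit does.

    # Illustrative sketch: quantize a sample in [-1.0, 1.0] down to 4 bits.
    def crush_to_4bit(sample):
        levels = 2 ** 4                                  # only 16 values
        q = round((sample * 0.5 + 0.5) * (levels - 1))   # map to 0..15
        return q / (levels - 1) * 2.0 - 1.0              # back to [-1, 1]

    print(crush_to_4bit(0.1234))   # snaps to the nearest of 16 steps
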
The Game Boy sound chip can be seen as a synthesizer without filters. It has four channels that can be seen as oscillators:

  1. Pulse (square) wave with volume envelope and “sweep”
  2. Pulse (square) wave with volume envelope
  3. Waveform: can play a sequence of 32 4-bit samples (yes, this is a 4-bit sampler!)
  4. Noise with volume envelope



These channels can be controlled by programs, so game programmers were able to create sound effects and music in Game Boy games. As you can see, all sound modules are missing filters, one of the challenges this chip poses for music creators. But what can the channels actually do? The pulse channels and the wave channel can produce frequencies from 64 Hz (good bass) to 131072 Hz (vampire bat/dolphin territory; humans can't hear that). The noise channel produces frequency mayhem between 2 Hz and 1048576 Hz, but this is hard to control, since it is nothing but white noise and therefore atonal. The sweep function of the first channel creates a “piuuu” that overrides all other current frequency values. The pulse channels can also shift their pulse width, so if you use both channels together it is possible to create fat pulse bass sounds etc. But more about that later.
The waveform channel is very flexible, as you can imagine. It can be used to create all kinds of low-resolution waveforms, from simple synth shapes like sawtooth to full sample playback.
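
For the numerically curious, a hedged sketch based on the programming reference linked below: the pulse channels derive their pitch from an 11-bit register value x as f = 131072 / (2048 - x), which is where the 64 Hz and 131072 Hz limits come from.

    # Illustrative sketch of the pulse channel pitch formula from the
    # programming reference: x is the 11-bit frequency value (0..2047).
    def pulse_freq(x):
        return 131072 / (2048 - x)

    print(pulse_freq(0))      # 64.0 Hz, the "good bass" end
    print(pulse_freq(2047))   # 131072.0 Hz, vampire bat territory

    # The wave channel holds 32 samples of 4 bits each (values 0..15);
    # a single sawtooth cycle would simply look like this:
    sawtooth = [i // 2 for i in range(32)]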

The Control

That's about everything that can be said about the PAPU in general without going too deep into programming details. If you are interested in those, you will find a reference link at the end of the article. For us mortals it is more interesting what programs are already out there to harness the sound power of the Game Boy. Here we find two different paradigms: one is complete numeric control of every aspect of the sound chip, the other brings a nice graphical interface. And those will be the actual reviews you will read in the following months.

Links:

Programming reference:
http://www.devrs.com/gb/files/hosted/GBSOUND.txt

LSDJ wiki:
http://wiki.littlesounddj.com/GameboyResources?v=gf9

Wikipedia:
http://en.wikipedia.org/wiki/Game_Boy_Sound_System

http://en.wikipedia.org/wiki/Game_Boy

Tuesday 4 October 2011

The Future of Media Production


Now, this is an interesting development. Novacut, a project financed by a Kickstarter campaign, implements a WebKit-based user interface.
Big deal; Apple did it with iOS and Google did it with Android, so are you about to tell us something new?
Hopefully, yes. I have a dream, and the dream is software that provides raw functionality for media editing and generation.

It needs a timeline to arrange, synchronize, animate and automate, in order to ultimately enable the telling of stories.
The timeline has to be able to sequence all kinds of data.
The accuracy should be infinitely divisible (beyond tick/frame).
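
A hedged sketch of what "infinitely divisible" could mean in practice: store timeline positions as exact rational numbers instead of integer ticks or frames, so any subdivision stays exact.

    # Illustrative sketch: exact rational timestamps instead of ticks/frames.
    from fractions import Fraction

    pos = Fraction(5, 3)               # 5/3 of a second, off any tick grid
    half = pos / 2                     # subdividing stays exact: 5/6
    print(half + Fraction(1, 24))      # add one 24 fps film frame -> 7/8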

It needs absolute, raw data routing flexibility (I mean that literally: I want to route a video to an audio channel and listen to the raw data stream, or maybe only the variations of red from the 3rd pixel).
Making use of this raw data routing requires intelligent interpretation of the media. The same goes for copy/paste.
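
A hedged sketch of the pixel-to-audio idea, with a faked video (the array shapes and names are invented; no real decoding API is used): take the red value of the 3rd pixel from every frame and treat that sequence as a signal.

    # Illustrative sketch: "listen" to the red channel of the 3rd pixel.
    import numpy as np

    # Fake video: 240 frames of 64x64 RGB; a real app would decode these.
    frames = np.random.randint(0, 256, size=(240, 64, 64, 3))
    red = frames[:, 0, 2, 0]           # red value of the 3rd pixel, per frame
    signal = red / 127.5 - 1.0         # rescale 0..255 into audio range -1..1
    print(signal[:5])                  # this stream could feed an audio channel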

It needs a powerful database to quickly find stuff from your stock, be it samples/loops, film clips, or a control lane.
Alongside the database it needs a metadata miner using audio analysis and computer vision.
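
A hedged sketch of that pairing, using only the standard library plus numpy and an invented schema: a trivial miner computes a couple of features per clip and indexes them so they can be queried instantly.

    # Illustrative sketch: a tiny metadata miner feeding a queryable database.
    import sqlite3
    import numpy as np

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE clips (name TEXT, seconds REAL, rms REAL)")

    def mine(name, samples, rate=44100):
        # Extract trivial features from raw audio and index them.
        seconds = len(samples) / rate
        rms = float(np.sqrt(np.mean(samples ** 2)))    # rough loudness
        db.execute("INSERT INTO clips VALUES (?, ?, ?)", (name, seconds, rms))

    mine("kick.wav", np.random.uniform(-1, 1, 44100))
    mine("pad.wav", np.random.uniform(-0.1, 0.1, 88200))

    # The payoff: instant retrieval from your stock.
    print(db.execute("SELECT name FROM clips WHERE rms < 0.2").fetchall())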

It shall be cross-platform and support as many current extension standards as possible (VST, AU, FxPlug, LADSPA, DSSI, DirectX etc.).

The features I am talking about should not be implemented in a GUI, but in a server.
This server shall be accessible over different protocols, to provide maximum flexibility in how its functionality is interfaced.

All functionality needs to be implemented modularly, as plugins to the core server.
Plugins shall be available in a central repository (let's not say app store :P), available for installation when required.
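
A hedged sketch of that core/plugin split (every name invented for illustration): the core server is little more than a registry, plugins register their functionality, and any front end dispatches by name.

    # Illustrative sketch: a core server that is nothing but a plugin registry.
    REGISTRY = {}

    def plugin(name):
        # Decorator: modules register their functionality with the core.
        def register(func):
            REGISTRY[name] = func
            return func
        return register

    @plugin("gain")
    def gain(samples, amount):
        return [s * amount for s in samples]

    def dispatch(name, *args):
        # The only call a front end (GUI, terminal, HTTP...) ever needs.
        return REGISTRY[name](*args)

    print(dispatch("gain", [0.1, -0.5, 0.25], 2.0))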

All this will allow for maximum flexibility and scalability.

Some practical examples:

An HTTP interface:
Open your browser, enter the server's address, use an HTML5 or, better, a WebGL GUI, and start producing.
Thanks to the separation of interface and function server, the way you create can look however you want.
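
A hedged sketch of such an HTTP front end, standard library only, with an invented payload format: the browser posts a function name and arguments, the server dispatches and answers in JSON.

    # Illustrative sketch: expose the function server over plain HTTP.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def dispatch(name, *args):   # stand-in for the registry sketched above
        return {"gain": lambda xs, a: [x * a for x in xs]}[name](*args)

    class Api(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers["Content-Length"])
            body = json.loads(self.rfile.read(length))
            result = dispatch(body["name"], *body["args"])
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(json.dumps({"result": result}).encode())

    # HTTPServer(("localhost", 8080), Api).serve_forever()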

Do you like music trackers like Renoise?
Connect with a terminal and off you go.

Running out of DSP power on one of your machines?
Add a second server as a slave.

Want to use a Mac AU that is not available on your Linux workstation?
Start the server on the Mac and slave it to your localhost.

Do you like collaborative jamming or bouncing ideas?
Connect to the same server with multiple users.

User interfaces could be as simple as one record button, or as complex as a modular patch bay with non-interlocking tracks running at different speeds in the timeline.

This would even be a way to move media production into the cloud.

Why should anyone desire this kind of inconsistent environment?

Well, this kind of framework would enable more people to build simple access points to media production.
I have been looking at what's happening with REAPER, Usine, Max for Live and Renoise.
Decentralizing this kind of power, making it platform-independent, opening it up wider: that would be amazing, wouldn't it?

I hope someone crazy enough to take that kind of project on has read this article.