Tuesday, August 14, 2018

Commanding Laserdiscs

For the past couple evenings, I've been poking at controlling video playback programmatically... I was inspired by Kevin Savetz' recent project to resurrect the Rollercoaster Apple ][ Laserdisc game.  He got it working, fixed it, enhanced it to work for the DVD version of the game, and was showing it off at KansasFest.

My first quick hack, as seen before on this blog, was to screen-scrape an Infocom game (Hitchhiker's Guide to the Galaxy), and based on text from that, force VLC to play back certain frames or tracks of video from the Hitchhiker's Guide TV show.

I've been poking at actually making Rollercoaster playable inside the browser for my next trick. I'm basing it all on the source code, which Kevin has made available to us, and using an HD transfer of the movie, which is hopefully the same edit of the film as the Discovision Laserdisc.

There will be more on this later, but the short version is that I'm using jsbasic to run Applesoft Basic right in the browser (no emulation of an Apple ][ ), simulating the Super Serial port (6551 ACIA chip), capturing the BASIC game's laserdisc commands being sent to said serial port, and then using that information to control a fakeo-laserdisc player, which is just an HTML 5 video player.  I'm still working through all of this.  The cool thing is that when I'm done with this, it could be a great test system for people wanting to make more LD-AppleII games.  As long as you use video files that can be properly frame-associated with real discs, it should be a drop-in kind of thing.

NOTE: if I learn more information, I'll note it in this post.  I'm really green when it comes to Apple II hardware, and really Apple II programming in general. This is really my first deep-dive into Apple II serial hardware and specifics of Applesoft BASIC!

Super Serial card

The Super Serial card for this game is in slot 2, and is fitted with a 6551 ACIA serial interface chip. Writes to the ACIA get sent out immediately; they are not buffered.  The ACIA has one bit on its status port (0x08, or b00001000) that goes high when there's a byte to read.  The LDP will send an 'R' when its current task is complete.

The Super Serial card being in slot 2 means that its four registers are at these decimal locations:

  • 49320 - Data
    • Read from here to read characters sent to the Apple
    • Write to here to send characters out to the LDP
  • 49321 - Status
    • Read from here to get error codes
    • Read and mask off 0x08 to answer "is a character ready to be read from the LDP?"
  • 49322 - Command
  • 49323 - Control
    • These two are used to configure the port: baud rate, parity, etc.  For this project, I'm simulating the behavior of the device at a very high level, so I basically ignore all of this stuff.
    • The real hardware runs at 4800 baud, 8-N-1
This means that to write a carriage return (character 13) out to the LDP:
  • POKE 49320,13
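Since I'm simulating at a high level, the whole card boils down to a handful of memory-mapped registers. Here's a minimal Python sketch of that idea; the class and names are mine for illustration, not from the actual project:

```python
# Hypothetical high-level sketch of the slot-2 Super Serial card as the
# game sees it; names (FakeACIA, etc.) are mine, not from the project.

DATA    = 49320  # read: byte from the LDP; write: byte out to the LDP
STATUS  = 49321  # bit 0x08 high = a received byte is ready
COMMAND = 49322  # port configuration (ignored at this level)
CONTROL = 49323  # port configuration (ignored at this level)

class FakeACIA:
    def __init__(self):
        self.rx = []   # bytes waiting to be read by the Apple
        self.tx = []   # bytes the game has sent toward the LDP

    def poke(self, addr, value):
        if addr == DATA:
            self.tx.append(value)   # writes go out immediately, unbuffered

    def peek(self, addr):
        if addr == STATUS:
            return 0x08 if self.rx else 0x00
        if addr == DATA:
            return self.rx.pop(0) if self.rx else 0
        return 0

acia = FakeACIA()
acia.poke(DATA, 13)           # POKE 49320,13 -- send a carriage return
acia.rx.append(ord('R'))      # the LDP answers "R" when the task completes
print(acia.peek(STATUS) & 0x08)   # 8 -> a byte is ready
print(chr(acia.peek(DATA)))       # R
```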

LDP Simulation

Anyway, there's been some interest from others wanting to simulate the Laserdisc player (LDP) using other methods (VLC, for example), so I thought I'd do a quick post to explain my current understanding of the specific commands used by the game. You can look up specifics of the protocol in the LD-V4400 User's Manual, or in the DVD-V8000 User's Manual.  All commands are two-letter sequences, uppercase (although the manual says lowercase is accepted as well), and can have a number BEFORE the opcode.

There are a lot of commands that these can handle, but these are the commands that we care about for this game:

FR
Go into "frame mode".  This means that any addresses will be in number of frames from the start of the disc.  Frame 0 is the first frame on the disc; the final frame depends on the duration of the disc.  CAV discs hold about 30 minutes of content; at 29.97 FPS (NTSC), that is roughly 54,000 frames.

(address) SE
Seek to the address (frame number) and go into still-frame mode.  Reading from the ACIA will yield an "R" when the seek completes.

(address) PL
Play, and go to still-frame mode when the address (frame number) is reached.

Additionally, the game's command strings use '/' to indicate a carriage return (CR) character, which is sent as decimal value 13.  The player will then execute the command, and will send back an "R" character (decimal 82) when it completes.
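If you wanted to simulate the player yourself, the command strings are easy to pick apart: split on the '/' (carriage return) marker, then peel off optional-number-plus-opcode pairs. A quick Python sketch (my own illustrative code, not from the game or any real player):

```python
import re

# Hypothetical parser for the two-letter LDP opcodes used by the game.
# A '/' in the game's command strings marks the carriage return that
# makes the player execute the command group before it.

def parse_ldp(command_string):
    """Split e.g. 'FR6726SE/FR6959PL/' into executable command groups."""
    groups = []
    for chunk in command_string.split('/'):
        if not chunk:
            continue
        # each opcode is two uppercase letters, optionally preceded by a number
        ops = re.findall(r'(\d*)([A-Z]{2})', chunk)
        groups.append([(int(n) if n else None, op) for n, op in ops])
    return groups

print(parse_ldp("FR2818SE/"))
# [[(None, 'FR'), (2818, 'SE')]]
```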

Let's go through a couple real examples...

FR2818SE/
  • Switch the LDP into frame mode
  • Seek to frame 2818, and still-frame there
  • LDP will send "R" when it has completed
AKA: Still-frame on frame 2818, and send "R" when done.

FR2134PL/
  • Switch the LDP into frame mode
  • From the current position, play until the frame number reaches 2134, and still-frame there
  • LDP will send an "R" when it has completed
AKA: Play until frame 2134, and send "R" upon reaching it.

FR6726SE/FR6959PL/
  • Switch the LDP into frame mode
  • Seek to frame 6726, and still-frame
  • LDP will send "R" when ready
  • Switch into frame mode again
  • Play until frame 6959, and then still-frame
  • LDP will send "R" when ready
AKA: Play from frame 6726 through 6959, and send "R" when done.

Game implementations

The serial port configuration and setup is at line 31000.  The sequence it does is that it POKEs 11 to 49322, and 28 to 49323, which sets up the serial port configuration (baud, parity, etc).  It also does something with CHR$(4); "PR#2" or "PR#0".  I'm not entirely sure why they're needed here, but CHR$(4) is the DOS command prefix, and PR#n redirects output to the card in slot n (PR#0 points it back at the screen).

The BASIC code to send the command strings is at line 40000, and the command string is stored in VC$.  So in the program you'll see stuff like:
34000 VC$ = "FR2818SE/": GOSUB 40000 
34011 VC$ = "FR6726SE/FR6959PL/": GOSUB 40000
You already know what these do from the above examples!

The LDP communication code at 40000 is:
40020  FOR I = 1 TO  LEN (VC$)
40030  IF  MID$ (VC$,I,1) = "/" THEN  POKE 49320,13: WAIT 49321,8:J =  PEEK (49320): GOTO 40060
40040  POKE 49320, ASC ( MID$ (VC$,I,1))
40060  NEXT I
40070  RETURN
Or, in human-readable terms:
  • if "DISC" is set to 0, then there's no LDP, so just return
  • For each character of the command string:
    • if the character is a '/', then send the LDP a carriage return, and wait for the "R" response
    • otherwise just send the current command string character to the LDP

The code also uses the "WAIT" command, which was not implemented in the javascript library I was using.  This command essentially blocks until the conditions are met.  In the above code it is effectively PEEKing at 49321 (ACIA status) until the 0x08 bit is set; that is, until a character is ready to read.  It then reads the character into variable J, which is then ignored. ;)  But at that point it knows the LDP is done with the task.
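Put another way, the routine at 40000 boils down to something like this Python sketch, with hypothetical send_byte()/wait_for_response() stand-ins for the POKE/WAIT/PEEK calls (the helper names are mine):

```python
# Rough Python equivalent of the BASIC routine at line 40000.
# send_byte() stands in for POKE 49320, and wait_for_response() stands in
# for WAIT 49321,8 followed by the PEEK that eats the LDP's "R".

def send_command(vc, send_byte, wait_for_response):
    for ch in vc:
        if ch == '/':
            send_byte(13)         # POKE 49320,13 -- carriage return
            wait_for_response()   # block until the LDP says it's done
        else:
            send_byte(ord(ch))    # POKE 49320, ASC( MID$(VC$,I,1) )

# quick demo with list-backed stand-ins:
sent, log = [], []
send_command("FR2818SE/", sent.append, lambda: log.append('R'))
print(bytes(sent))   # b'FR2818SE\r'
print(log)           # ['R'] -- one command executed, one 'R' consumed
```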

Friday, August 10, 2018

4 Hour Projects: Adding Video to Interactive Fiction


After listening to the Eaten By A Grue podcast (episode 19) about the interactive laserdisc-based game for the movie "Rollercoaster", and then following that up with the episode about Hitchhiker's Guide (episode 16), a spark popped in my head. It should be possible to somehow "watch" the Hitchhiker's Guide (H2G2) Infocom game and play clips of the TV show, movie, etc., synchronized to the scenes and what you're looking at.
All of the distributable code for this is available at the github page for the project.
Just about all of this project is based on existing stuff, but I glued it all together.
  • Frotz - runs the Z-code, in a text-based terminal
  • VLC media Player - plays back the video
  • H2G2 TV on DVD - the DVDs
  • Netcat - so my client can talk to the server
  • Tee - to split the output from frotz
  • Perl script (in this repo) - "reads" the text, matches text, tells VLC what to play
The whole thing kinda works, is buggy, unoptimized, but that's the nature of a 4 hour hack.


This is the basic system... This was a quick sketch I made to get the idea down. Essentially there are two halves. (I also called it "zvid" instead of "llifvid". pls ign. thx.)

Server/Video player

The server on the left is essentially a perl script that:
  • Matches text from the interactive fiction
  • has a simple interpreted language that can perform sequential functions
  • outputs "remote" commands for VLC
The output of that perl script is piped into VLC. If you're going to reproduce this, be aware that enabling the VLC shell was a bit tricky, and I didn't document the process. I think I needed to enable extended preferences in VLC to see the option to turn that feature on.
Also on this side are the video files for H2G2 episodes 1 and 2. I used Handbrake to rip them off of my DVDs. I do know that the episodes exist online, but those may be set up for web streaming, and would need to be transcoded via Handbrake or some other tool, which basically just adds in some information that isn't there.
Netcat (nc) is setup as a listener for the text output from the game engine. That is piped into the perl script, whose output is piped into VLC.

Client/Game engine

The client on the right is some bash script that pipes and connects Frotz with the game file. I used the MS-DOS version of the data file, although I do not know if any other versions differ in any way. It was just easiest for me to grab without digging out and setting up my Amiga to pull the files off of my game's floppy.
The output of frotz is piped through tee, which splits the output to two different places. The first is the console, so you can see what you're doing; the second is usually a file, but I've changed that path to pipe the output to netcat, set up as a transmitter to the server.


In the perl script is a quick interpreter that I hacked together to run little micro scripts of code. I started doing this as an array of arrays, but if you've ever done that in perl, you'd know that such things are never good ideas in that language. I briefly considered hopping over to python to do it, but I already had stuff done, and was still toying with the idea of having the perl script itself listen on a socket, which I was unsure of how to do in python. So it's in perl.
Anyway, I switched it over to be a plain text blob in the source file. At startup, it cleans up the text, removing comments and empty lines. It also breaks each line up into two elements, the opcode and the parameter, and stores that as the runtime program. This vastly simplified the runtime routines.
The two main entry points for the language are the label and the text match. It's a sort of event-driven, sequential language, with no nesting, no calls, no iteration, none of that... one operation per line. I did include comments, though, which are denoted by a pound sign (#) and continue to the end of the line. They can be put anywhere, as they are filtered out before runtime.
: label
Labels are used for 'goto' statements, or calling the goto function to set the current PC (program counter). If you call the doGoto() function, it will adjust the PC to the line after that label, or to -1, indicating that it was not found, and there's nothing to do.
? text to match
This denotes text that should be matched. As the program runs, it reads in byte by byte from the client and accumulates it. When it hits a newline (0x0D or 0x0A), it sends the accumulated text to the "got a line" function, which tries each "text to match" string to see if it matches any part of the current line. If one does, it sets the PC to the next line, and returns.
From there, there are a few opcodes that can be called:
seek 100
This will seek the current video file to 100 seconds in.
until 110
This will wait until the playback timer hits 110 seconds into the video. Due to time limitations, this is implemented as a hack: instead of asking the video file for the current time, it remembers the last 'seek' number called, takes the difference, and sleeps (blocking) for that many seconds. For the above two opcodes, the "until 110" call will essentially sleep for 10 seconds.
There is also an opcode marking that a sequence of opcodes is done; the "do until done" execution stops there, leaving the PC at -1 to indicate there's nothing more to do.
player play
This sends the "play" command to VLC. It can send anything. Useful things are:
play           # press play on the current video
pause          # toggle pause
frame          # advance one frame; forces a paused state
stop           # stops the player
fullscreen on  # makes the video full screen (on|off)
seek 100       # you can manually call this as well
rate 2         # twice speed... or "0.25" for quarter speed, etc.
add FILENAME   # adds a file to the playlist, and switches to it
These essentially just print out the command, as the VLC shell is consuming the commands directly.
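Putting the pieces above together, the load step can be sketched in a few lines of Python (the label name and match text below are made-up placeholders, and the function name is mine, not the perl script's):

```python
# Minimal sketch of the script format described above: strip comments and
# blank lines, then store one (opcode, parameter) pair per line.
# The label "arthur_bed" and the match text are hypothetical examples.

SCRIPT = """
: arthur_bed          # label for doGoto()
? You wake up         # match this against the game's output
seek 100
until 110
"""

def load_script(text):
    program = []
    for line in text.splitlines():
        line = line.split('#')[0].strip()   # comments go to end of line
        if not line:
            continue                        # skip blanks
        opcode, _, param = line.partition(' ')
        program.append((opcode, param.strip()))
    return program

program = load_script(SCRIPT)
print(program)
# [(':', 'arthur_bed'), ('?', 'You wake up'), ('seek', '100'), ('until', '110')]
```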

Determining match strings

For this, I basically ran "frotz hhg.z3" and copy-pasted long, seemingly unique lines of text from the game to match against. It worked okay, but was kinda tedious.
I was originally going to extract the room names/descriptions from the z3 file using the tools, but gave up on that in favor of this simpler, quicker approach.

Determining timing

So to get the second counts, I ran VLC, playing the video file, directly: "vlc episode1.m4v", and did a lot of typing in the shell of the above commands. I would type "play" to let it play, then "pause" or "frame" to get it to stop. You can get the current time with "get_time", manually seek to specific times like "seek 300" or differences from the current time, "seek -10" for ten seconds ago, etc.
I could have used the GUI for this, but it shows time as mm:ss rather than as seconds only.
It was tedious, but it worked. An easier to use mechanism for this would be advantageous.


So yeah... it worked.  With a lot more work it could be made to be reliable, have a nice editor, and be easily streamlined into more games.  I'd personally want to see an integrated executable as well, to get rid of all of this netcat and shell weirdness, and just have one exe that runs, reading in a language file.  Game packages could be created with the language file, audio, video, etc.  But I feel like I've accomplished what I wanted to for this. 

Saturday, July 28, 2018

Digital Logic in Javascript...

I needed some time off from my main project right now, and I had this bug in my brain to try out some of this, so I did a thing.  It uses javascript, jquery, bootstrap, and css stuff to do the whole thing. I use some of this at work, so this was a great chance to learn a bit more about that tech.

So here's a link to the quickly and horribly named "Logicr", pronounced "lä-jik-ər".

It's essentially a tiny javascript library/set of classes that simulate digital logic.  I'm not going to start drawing out diagrams of class hierarchies because that would take too long.  Instead, I'll quickly explain a bit of it, and then explain what the interface is showing.

At the bottom level, there are objects that are "pins".  A pin can be set with a low, high, or floating data level.  I made the level an actual value from 0-100: below 25 is "low", above 75 is "high", and in the middle is "floating".  When you read a pin, you get back a true/false value, and you can write either the 0..100 level or true/false.  I made it this way so that I can eventually simulate pullups/pulldowns in the circuits.

Those pins are included in "nodes", which have a collection of inputs and outputs.  One example of a node is a logic gate.  A typical gate has two inputs, A, B, and one output, Y.  When the nodes get their update() call, they do whatever math is necessary to read in from their inputs and set their outputs.   Currently the system supports the basic 1-2 input logic gates: Not, And, Nand, Or, Nor, Xor, as well as one-pin output "sources" that generate High, Low, and Float output.

Another node is a "clock".  This one toggles its output once a second (1000ms) between high and low levels.

Another type of thing in the system is a "wire".  Wires are just lists of connections "from" and "to" pins on nodes in the system.

In the above example, there's a clock node (XTAL1) whose output Y goes to the first LED, named "clk".  A second wire from XTAL1's output goes to the input of a Not gate (inverter), whose output in turn goes to the second LED, "!clk".

You may notice that they don't change instantaneously.  That's because the system is set up to update all of the nodes first, then update all of the wires.  This prevented having to generate a real directed graph and deal with loops and recursion and stuff.  It sort of simulates propagation delay on real parts, but I'm sure it has its own unique quirks.

So every 50ms, it goes through and tells all of the nodes to do their math, then immediately goes through the list of wires, reading from their FROM and writing to their TO.
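The two-phase update can be sketched in a few lines of Python (illustrative class and attribute names, not the actual Logicr API):

```python
# Sketch of the node-then-wire update described above. Each node computes
# its outputs from its current inputs; then every wire copies its FROM pin
# to its TO pin. The inverter therefore lags the clock by one tick, a bit
# like propagation delay.

class Clock:
    def __init__(self):
        self.Y = False
    def update(self):
        self.Y = not self.Y          # toggle each tick

class Not:
    def __init__(self):
        self.A = False
        self.Y = True
    def update(self):
        self.Y = not self.A

xtal1, ic1 = Clock(), Not()
nodes = [xtal1, ic1]
wires = [(xtal1, 'Y', ic1, 'A')]     # XTAL1.Y -> IC1.A

def tick():
    for n in nodes:                  # 1) every node does its math...
        n.update()
    for src, out, dst, inp in wires: # 2) ...then every wire propagates
        setattr(dst, inp, getattr(src, out))

tick()
print(xtal1.Y, ic1.Y)   # True True -- !clk still reflects the previous clk
```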

So for the clock circuit:

  • Nodes:
    • Clock XTAL1 running at 1000ms
    • LED D1, blue, labelled "clk"
    • LED D2, blue, labelled "!clk"
    • Not gate IC1
  • Wires
    • XTAL1.Y to D1.A -- connects the clock to the first LED
    • XTAL1.Y to IC1.A -- connects the clock to the not gate
    • IC1.Y to D2.A -- connects the not gate to the second LED

That's the basics of it.

But there's more.

I added a few input devices and output devices.  First is the switch, which is essentially an HTML checkbox at its core.  When the switch node gets called to update, it reads the value directly out of the checkbox widget in the browser, and sets its output pin to the appropriate value.  Similarly, the LEDs are just a "div" that's made round and the right color (dim/bright red, etc) via CSS.  When the LED nodes are updated, they read the value from their input pin, then apply the appropriate "on" or "off" class to their HTML div.  The browser and CSS take care of the rest.

There's a couple other things happening in the circuit there too...

The amber switches are the A and B inputs to an XOR gate.  The A and B switches also go to the amber LEDs with the same labels, so that you can see what's going on there.  The output from the XOR gate goes to the green LED labelled Y.

The four red switches labelled 0x01, 0x02, 0x04, and 0x08 are run into a sort-of ROM, implemented as just a simple 16-value lookup table.  The ROM has 4 inputs and 8 outputs.  The 8 outputs are wired to the red seven-segment display node's "a"-"g" inputs, which light up the segments of the display on the web interface.  As you toggle the switches, the appropriate number is displayed.  The ROM data is basically a lookup table of which segments to drive for each value 0x0 through 0xF; the equivalent of the 74LS47 IC.  I was originally going to build this circuit using logic gates, but it got really complex...  I may still do it eventually, once I have a better editor.
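For the curious, that kind of segment ROM is just a 16-entry table. Here's an illustrative one in Python using common 7447-style digit patterns; the actual table in Logicr may differ:

```python
# One byte per hex digit; bit 0 drives segment "a", bit 6 drives "g".
# These are common 7-segment patterns for 0-F, used here for illustration.
SEGS = {
    0x0: 0x3F, 0x1: 0x06, 0x2: 0x5B, 0x3: 0x4F,
    0x4: 0x66, 0x5: 0x6D, 0x6: 0x7D, 0x7: 0x07,
    0x8: 0x7F, 0x9: 0x6F, 0xA: 0x77, 0xB: 0x7C,
    0xC: 0x39, 0xD: 0x5E, 0xE: 0x79, 0xF: 0x71,
}

def segments_for(value):
    """Return the set of lit segments ('a'..'g') for a 4-bit input."""
    bits = SEGS[value & 0xF]
    return {seg for i, seg in enumerate("abcdefg") if bits & (1 << i)}

print(sorted(segments_for(0x7)))   # ['a', 'b', 'c'] -- the digit 7
```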

Mess with it...

In the interface, if you scroll down, you'll see an editor section at the bottom. Press the "Load Default" button to fill the text box with the circuit description that's running, and you can see all of the logic nodes and wire connections. Press the "Run Circuit" button to read in the JSON content from that window and rebuild the internal logic to use it.  The content there has to be valid JSON, otherwise it won't work.  The "Run" button will have a green border when it's valid, red when it's not.


So yeah. This was meant to be a quick test to see how well it could quickly be done, and expanded upon for a possible future project that I've been thinking about for a while. There are still more component classes I want to add, such as "packages" which can group together circuits... for example, creating flip-flops using gate logic, or building that 7-segment decoder, or more complex things...

I also would like to make an in-browser editor for the circuits.  I've been looking at jsPlumb as well as a few other solutions for doing "create movable boxes with connection magnets that are connected with wires" type of things.  I may just spin up something of my own, as having draggable boxes is fairly easy to do using CSS/JS, and connections are just SVG lines between them, etc...  Although I'm sure the nuances of the interface make it more complex than it seems...

Oh. and this is running in my tiny page thingy, llmin... It's basically a wrapper for a page that sets up jQuery, bootstrap, and some other stuff, so that I don't have to create that boilerplate every time I want to make a page to test stuff. ;)

Saturday, June 2, 2018

Finishing touches on LL530 v1.00 firmware

I'm currently cramming to get the first version of the firmware for my LL530 project out, so I can ship the first few ordered boards. (YAY!).

Quick backstory...

LL530 is a board with an Atmega32u4 (Arduino Leonardo/Pro Micro) and a few connectors on it, so that you have a USB interface for Amiga/Atari ST/Atari mice, Atari controllers, and Amiga 500, 1000, 2000, 3000 keyboards.  Yes, it even supports weird Atari controllers like Basic Programming's "Keyboard Controller" and Star Raiders' Video Touchpad, as well as the Indy 500 "Driving" controller, which they made look identical to Paddles... which are also supported.

Yes, there are other versions of portions of this, but I wanted more flexibility. I want to be able to just connect to the device, tell it that I have a mouse plugged in now, and have it just work... and in the future, auto-configure the ports without even connecting... but I digress...

Yeah. I wrote a program in VCS Basic Programming via the Stella emulator on my Mac. Yeah, it's as cumbersome as it looks, but I'm still freaking impressed that BASIC is even possible... it means only one thing: Warren Robinett is a god among men!
I'm getting off the subject here... okay...

Hey, Scott... don't forget the reason for this post...

Oh right.. Thank you Document McSubheading for reminding me.

No Prob..... now get on with it!

RIGHT.  So I'm trying to get all of the non-keyboard stuff working in one firmware.  I have a decent shell interface that lets you do all sorts of configuration stuff of the two ports, which I call "Port A" and "Port B".  I have a framework where you can pick the input device (joystick, mouse, paddle, etc) and then pick what it sends out to the host computer (mouse movements, joystick movements, various keyboard configurations.)

Now, the Arduino framework is kind of notorious for being slow in some respects.  That's the price you pay for ease of use.  One of the slowest is the port pin reads and writes.  For example:

digitalWrite( Pin3, HIGH )

 is a function that sets Arduino digital pin 3 to a HIGH value.  Internally, it looks at "Pin3" and figures out where on the ATmega microcontroller that maps to... The digital/analog pins on the Arduino are abstractions of the ATmega's actual ports.  For example, digital pin 3 might actually be "PORTD" bit 5.  This function makes it so that beginners don't need to know that stuff... and what's more, all Arduinos have digital pins 0, 1, 2, 3, etc., but these generally map to different ATmega ports/pins... it might be PORTD bit 5 on a 328P, but PORTG bit 2 on a 32u4.

Too much info.  TL;DR: digitalWrite() digitalRead() are an order of magnitude slower than just reading PORTG and masking off bit 2, which is near instantaneous.  (I should note that analog reads are slow, regardless, as the micro needs time to make the reading.)

I noticed this on the Eastman Theater sign project. I was using digitalWrite() calls to bit-bang out the data to the sign's shift register/LED drivers.  I was getting roughly 5 frames per second, when driving all 240 columns of 8 pixels...  When I switched it to direct bit writes, it went well over 60 FPS!

Mouse support....

I have all of the other controllers working, at least somewhat, and I've had a USB-Mouse interface using a 32u4/Pro Micro on my RaspberryPi/Amibian system for years now, so I knew it would work... but i was surprised when I finally got to working on the mouse device reading to find it wasn't working.  It was getting readings, but it was missing some... it was missing a lot of them.

To read a joystick, you only need to know "is it up now?"  "is it left now?"... not very time critical between readings.  However, for reading a mouse, you need to read all of the "gray code" sequence state numbers that it generates.  As it rotates left-right (the up-down axis is identical), it sends out, over two bits of the connector, a sequence of digits.

00 ⇨ 01 ⇨ 11 ⇨ 10 ⇨ 00 ⇨ 01 ⇨ 11 ⇨ ...etc
Moving left

00 ⇨ 10 ⇨ 11 ⇨ 01 ⇨ 00 ⇨ 10 ⇨ 11 ⇨ ...etc
Moving right

The mouse generates these regardless of whether you read them.

There was some slowdown in reading the mouse, so I was missing some of the readings... instead of getting 00, 01, 11, 10, 00, 01, 11, ... I was reading 00, ... 11, 10, 00, .. 11, .. .. 01, ... 00, which generated bad mouse movements, since those transitions are sometimes recognized as left, sometimes as right, and sometimes as completely invalid changes.
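A common way to decode such a quadrature stream is a 16-entry step table indexed by (previous state, current state). Here's a Python sketch of the idea (illustrative code, not the LL530 firmware itself); note how a skipped state (a two-bit change) decodes as 0, which is exactly the information my slow polling was throwing away:

```python
# Step table indexed by (prev << 2) | cur. Adjacent gray-code transitions
# give +1/-1; "impossible" two-bit jumps (missed states) give 0.
STEP = [ 0, +1, -1,  0,
         -1,  0,  0, +1,
         +1,  0,  0, -1,
          0, -1, +1,  0]

def decode(states):
    """Accumulate position from a stream of 2-bit quadrature states."""
    pos, prev = 0, states[0]
    for cur in states[1:]:
        pos += STEP[(prev << 2) | cur]
        prev = cur
    return pos

# full sequence from the "moving left" diagram above: 00 -> 01 -> 11 -> 10 -> 00
print(decode([0b00, 0b01, 0b11, 0b10, 0b00]))   # 4
```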

So it must've been the port reading, right?  That was my first thought too.  The thought was cemented further when I tried out some stripped-down firmware, which worked, but once I read from all three possible buttons, it would fail again.  I thought maybe the Arduino framework was doing an analog read on one of those ports or something, so I wrote some code to profile read/write times, and found that the reads were just as fast as expected.  (I had switched the port IO for this project to direct reads, without the framework, long ago.)  (Sidenote: still not sure why commenting out button 3 had any effect... weird.)

So that wasn't it.

I eventually moved some simple mouse reading/generation code into a dedicated loop, which worked...

Next, I re-enabled my big switch statement that checks the input types, and checks the output types and does the right thing... to my surprise, it still worked.

I re-enabled some of the other polling routines... some are timing based, so they use the millis() function which of course has some overhead... and surprisingly, everything still worked.

Then I re-enabled the serial shell interface... and things broke... quickly!

But... the serial interface wasn't doing ANYTHING?!  what?!  All it's doing is first doing a
if( !Serial ) return;
The 32u4/Leonardo/Pro Micro will have major issues if you try to send out Serial data before it's done initializing that HID device.  This call just checks to see if the port is active.  I littered my code with these before Serial.print() blocks to prevent the thing from bricking again...  But even when the shell is active, I do a standard
if( Serial.available()) { .... }
to check if the user typed anything... then it pretty much returns...

The problem is... my assumptions were WRONG.

I threw some of these calls into my profiler, which is basically this kind of thing
startTime = millis();
for( x=0 ; x<1000 ; x++ ) {
    /* do the thing to test in here */
}
endTime = millis();
then it prints out (endTime - startTime)/1000, and you get the average time per loop.

if( Serial.available() ) { /* do nothing */ }

This took roughly 0ms per loop, so it's probably just checking a bit internally...

if( !Serial ) { /* do nothing */ }
I thought this did the same sort of thing... Nope.  No matter what I did: 10ms.  Every time this is called, it blocks for 10ms.  No wonder my mouse polling was failing!  It was missing tons of gray code sequence states, since the thing was almost constantly blocking in the "!Serial" check.


It's feeling better now. :D

Anyway... Here's some more info on the project in general:

Links for the project are in flux right now; once I get the first version of the firmware packaged up, i'll be setting up a proper website/hub for it with full documentation of the project.  Until then, here are some links:

Testing the system is kinda fun!  Although that 7800 controller is pretty horrible.

Tuesday, September 26, 2017

Arduino-Midi for Sim City Music Playback...

Roland Sound Canvas SC-55, hooked up through a hacked cable to an Arduino Uno,
which is connected to... NOT THE MACBOOK because I FORGOT

Was thinking in the car on the way to Interlock of a way to make the Sim City 2000 music thing better.

The project I'm talking about is to play the weird Sim City 2000 midi files at random intervals throughout the day, so that life feels more like Sim City. ;)

I have a python script that will play the songs on my laptop... (see below) but I want something more physical..

Roland MT-90. It plays midi files off of floppy disks to a builtin Sound Canvas.
Yes. I think it's a weird thing as well, but at least it sounds nice!

Was thinking of putting an Arduino inside of the MT-90, which would press the button sequence to: (switch on shuffle mode), (switch on random play), then (wait), then (play the next track)... (The Arduino would press the buttons via relays, the way I did it inside the Yamaha tape deck.)

I was thinking that it might be nice to not have to hack the device, and instead have just a MIDI thing I could plug in to any MIDI device... I'd have it do the above with the MT-90, but it doesn't have any MIDI control sequences to control playback... just MIDI notes/tone generation stuff.

Then I took a step back and thought that if I just stored the MIDI files on an SD card and had the Arduino play them itself out to a Sound Canvas, that'd do the trick! I found a 5-pin DIN cable and hacked in connections... but I forgot my USB-C to USB adapter at home. Dang!

So it's not really a failure of a project *YET*, but it was a definite lack of success...

Here's the python script. It expects to be run on a Mac, with timidity installed, which will play the music. Also it expects the midi files to be in a "SC2000/" subdirectory. It's not the most elegant thing, but I hacked it together in a couple days...

#  an attempt to make real life more like playing SimCity
#  It picks a random amount of time from 15-90 seconds
#  if there's silence for the entire thing, it will pick a random 
#  track in the SC2000 directory and play it.
#  then it repeats... until you ctrl-c out of it
#  v2 2017-07-28 - made more configurable, quieter output, class
#  v1 2017-07-27 yorgle@gmail.com
# Requires: - timidity to be installed (brew install timidity)
#   - OS X (10.12 tested)

import sys
import os
import time
import random
import subprocess
sys.dont_write_bytecode = True

class SimMusic:

    # defaults
    midicmd = "/usr/local/bin/timidity --no-loop {} 2> /dev/null"
    mididir = "SC2000/"
    silenceTimer = 0
    timerMin = 15
    timerMax = 90
    disabled = 999999
    timeout = disabled

    # constructor
    def __init__( self, tmin = None, tmax = None ):
        if( tmin != None ):
            self.timerMin = tmin

        if( tmax != None ):
            self.timerMax = tmax


    def setupTimer( self ):
        self.silenceTimer = 0
        self.timeout = random.randint( self.timerMin, self.timerMax )

    def resetTimer( self ):
        self.silenceTimer = 0

    def stopTimer( self ):
        self.silenceTimer = 0
        self.timeout = self.disabled

    def systemIsPlayingAudio( self ):
        process = os.popen( '/usr/bin/pmset -g' )
        text = process.read()
        if( "coreaudio" not in text ):
            return False
        return True

    def playRandomMIDI( self ):
        files = os.listdir( self.mididir )
        fname = self.mididir + random.choice( files )

        print( "Timidity: {}".format( fname ))
        process = subprocess.Popen( self.midicmd.format( fname ), shell=True )


    def run( self ):
        playingTimer = 0
        print( "scanning..." )

        while True:
            if( self.systemIsPlayingAudio() ):
                if( playingTimer == 0 ):
                    print( "\n" )
                sys.stdout.write( '\033[2K' )
                sys.stdout.write( "\rAudio is playing... Waiting. ({})".format( playingTimer ))

                time.sleep( 1 )
                playingTimer = playingTimer + 1
                continue

            playingTimer = 0
            if( self.silenceTimer == 0 ):
                self.setupTimer()    # pick a new random timeout for this stretch of silence

            self.silenceTimer = self.silenceTimer + 1

            if( self.silenceTimer > 0 ):
                if( self.silenceTimer == 1 ):
                    print( "\n" )
                sys.stdout.write( '\033[2K' )
                sys.stdout.write( "\rShhh! {} of {} seconds has passed".format( self.silenceTimer, self.timeout ))
            if( self.silenceTimer >= self.timeout ):
                self.playRandomMIDI()    # silence lasted long enough; play a track
                self.silenceTimer = 0
            time.sleep( 1 )


# put this in your main app as well.
if __name__ == "__main__":
    simMusic = SimMusic( )
    simMusic.run( )

Monday, May 1, 2017

RC2017/4 Final! (Replica Results and Post-Mortem)

"The Treachery Of Pop Pixel Art" - Scott Lawrence, 4/30/17
Inspired by Andy Warhol and René Magritte's works
As you might know, I spent time this past month working on what I nicknamed "The Andy Project". The idea was to re-create some of Andy Warhol's art pieces on an Amiga, well, an emulated Amiga, using appropriate tools, as pixel-accurately as possible.

This blog post will be a summary of results, mentions of pitfalls, and that kind of thing.  The previous three blog posts cover most of the other information for the project.  Links for these are on the bottom of this post.

Philosophical musings...

Andy's work is interesting because you don't see any of the other "classic" artists doing their work in a medium like this, where you can get pixel-for-pixel copies of the originals.  Nowadays, sure, you get photographs of the original, but with these digital art works, you can get essentially the EXACT same content that was originally produced. This is likely why the "originals" have never been made public directly... Understandably so.... That floppy gets out, and we see posters of pop-pixel-art available in every mall. That said, I want to make poster blow-ups of some of these. ;)

This actually got me thinking a lot about the resulting media files that I have produced.  These will be virtually indistinguishable from the originals.  Are they the same as the originals?  In some of these, I am the person who drew all of the individual pixels in Deluxe Paint.  They are distinctly not the original files, which are still under lock-and-key.  But those may surface, and you may find that mine are nearly identical with respect to which color pixels are where on the digital canvas.

In the music industry, if you were to make a copy of a waveform by looking at numerical representations of the wave and typing them into a computer, you'd end up with a replica of the original waveform.  When played back, it's still Debbie Harry's "Rush Rush"... but you actually created it from scratch by copying the numbers from the original.  It's a replica... one that is functionally and actually identical to the original.  In fact, if you were to write a computer program that did this for you (go through the audio file, look at the numbers, then write those numbers into another file), then the computer has generated a replica of the original... That's exactly the same as what I'm doing here, right?

That software already exists.. it's "cp", "copy" or "drag-and-drop the icon to another folder."
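To make that thought experiment concrete, here's a toy sketch in Python, with plain bytes standing in for audio samples (no real audio tooling involved): "retype" every number into a new buffer, and the result is bit-for-bit the original.

```python
import hashlib

# bytes standing in for the samples of an audio file
original = bytes( range( 256 ) ) * 4

# the laborious "copying by hand": read each number, write it down again
replica = bytearray()
for sample in original:
    replica.append( sample )
replica = bytes( replica )

# built entirely from scratch, yet hash-identical to the original
same = hashlib.sha256( replica ).digest() == hashlib.sha256( original ).digest()
```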

Does that mean that I cannot take credit for my reproductions?  This is kind of why I had wanted to add specific mistakes/changes to mine, so even if you were to compare them at the pixel level, mine would be at the least, a derivative work. ... but is that enough?  Couldn't Andy's estate come after me and say "that imagery is exactly the same as the original, and you're putting it up online for free! Stop that now!"

I know that mine are replicas...  perhaps that's enough.

But then, if you think about the pop-art movement of the '50s and onward, it delves DIRECTLY into this, head on.  Andy, for example, took iconic things (soup cans, Marilyn Monroe) and used them as the basis for his iconic works.  If the Campbell's Soup can looked different, so would have his artwork.

So in that vein... My reproductions are essentially my brand new art pieces, just using his pieces as the inspiration...

Wait.. are mine replicas of his works?  Are mine new works, like the originals under the pop-art umbrella?

I don't know how to resolve this.

The Andy Project

I started out with a list of 11 images to pick from.  I thought I might get a few done, but I never set specific goals for myself... so I only got three of them done.  I still feel like this is a major success. Mainly because I can now do many of the other ones at my leisure using all of the tools and frameworks I've already set in place, with full knowledge of the pitfalls for the various steps.

I also created two pieces that I was inspired to make from the project, so I included those in the ZIP and slideshow ADF as well.  Links to the slideshow ADF downloads are at the bottom of this blog post, along with an emulator that can play it right in the browser.

Everything provided here is to be considered to be Creative-Commons BY-NC-SA. (Attribution, Non-Commercial, Share-Alike).

What went wrong

I think the big thing that went wrong was wrangling with tools. There are two instances that come to mind.
Nothing really went wrong here, I just thought it would almost
be appropriate for this to be my end-state of this image. :)
I feel like Andy might have accepted this.


First of all, my tool "ObeseBits" was giving me trouble.  It's the first Javascript-Processing app I've ever really made.  Most of the time for Processing, I just use Java mode.  That seemed a bit excessive, and I wanted to make something I could easily share, so I went Javascript instead. 

If you watch the time lapse, you may see bits like this roll through.
The frustration was real.
This is a fine choice, but I had a problem where... I'm not exactly sure what was going wrong, but I was re-drawing the entire screen every frame.  It seemed to be chewing on more RAM than it should, especially after it sits for 5 minutes... especially running in Chrome on an 8-year-old Mac with 2 gigs of RAM.  Yes, I see the irony here that I'm creating these images on a machine that could do this with 1/2 megabyte of memory... hehe  Anyway, it would get so unresponsive that it would essentially lock up the machine, but it always seemed like it was JUST ABOUT TO wake up.  But no... spinny rainbow pizza instead...  Force reboots were my friends.
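For the record, the standard fix for this kind of thing (which I haven't gone back and applied to ObeseBits) is a dirty flag: only repaint when something actually changed, instead of on every frame. A minimal sketch of the pattern, with hypothetical names:

```python
class DirtyCanvas:
    """Repaint only when something changed, instead of every frame."""

    def __init__( self ):
        self.dirty = True      # first frame always draws
        self.repaints = 0

    def mark_dirty( self ):
        # call this on pan / zoom / grid-setting changes
        self.dirty = True

    def frame( self ):
        # called ~60x a second by the framework
        if self.dirty:
            self.repaints += 1   # ...the expensive redraw would go here
            self.dirty = False
```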

FS-UAE emulator

The other main issue was that I couldn't get the ADF file exported from the emulator I was working in!  Seems like a pretty straightforward thing to do:

  1. Mount an ADF disk image
  2. Diskcopy Amiga Workbench 1.3 to it
  3. Make space by removing printer drivers, fonts, utilities, etc that aren't needed
  4. Copy my "Files" folder from the virtual hard disk to the virtual floppy
  5. Set the startup-sequence to just add disk buffers and then run the slideshow (DPShow)
  6. Quit out of the emulator
  7. The ADF disk image file you have there has all of the above done to it, right?

I was using FS-UAE, which, for reasons that kinda make sense, treats all mounted ADF disk images as read-only, and creates an "overlay" in a separate directory with all of your changes.  So... here's the list of things I tried to get the modified ADF out of the thing:

  1. "apply" the changes to the ADF.
    • Can't do this.  The change overlay is in a custom format.
    • No tools exist to do this. (!!)
  2. Turn on the UAE emulator flag "writable_floppy_images=1", which tells the emulator that it's okay to write directly to disk images for new disks.
    • Then you must mount a new disk, once this is set, diskcopy the content over, and you'll end up with an ADF with all of these changes. Right?
    • WRONG. FS-UAE ignores this flag.  There is literally NO WAY to write to an ADF that I have found directly from FS-UAE.  (let it be noted: otherwise, FS-UAE is freaking awesome.  this just really frustrated me.)
  3. At some point in here, I needed to edit the startup-sequence on the ADF, and I couldn't do it via the emulator's mounted-folder-as-disk-image, so I grabbed "Vim" from Fred Fish disk 591.
    • This required ARP for some reason, so I grabbed ARP and ran the installer.  
    • It replaced a bunch of AmigaDOS commands, added a library, etc.
    • Vi now worked.  Yay!
  4. Make a disk image to a file within the emulated Amiga.
    • Sounds ridiculous, but in fact, yes... it is ridiculous.
    • The only one I found that "should" work in AmigaDOS 1.3 was "TransADF"
      • Grabbed TransADF off Aminet, put it on my "Work13" disk (which is a mounted folder from my Mac). 
      • Ran TransADF, and... GURU MEDITATION
    • Fumbled for a while and realized I could boot an emulated AmigaDOS 3.1 system.
    • Booted up AmigaDOS 3.1, ran TransDisk, created the ADF, looked good.
    • Booted back into 1.3, and suddenly, on "loadwb", the system just locked up with a blank blue screen.
    • Eventually traced it back to ARP being an issue.
    • Luckily, my Work13: disk was being maintained by Git, so I just reverted the changes that the ARP installer made... and now it all worked again.  blerg.
So yeah.  Cumbersome as all get out, but I managed to get the ADF made, and I now know I can just boot into 3.1 and run TransDisk to do this again.

What went right...

I got three image replicas done! YAY!

I feel super proud that I was able to do it to the best of my abilities, with the best effort I could provide.  I tried my best to be pixel-perfect and color-accurate, and I think the results paid off really well!

You can see me work on these in a time-lapse on YouTube too!

What you see here are EXACTLY what I was doing this project to avoid.  Incorrect aspect ratio, wrong file format, etc. ;)  But they show the effort.  I left the Deluxe Paint panels up just for fun, and to show that these are in fact the ones that I made, and not the source Warhol material.  If you want the proper files, they're downloadable at the bottom of this post.

"campbells.pic" (As seen in Deluxe Paint III)
Original by Andy Warhol, Replica by Scott Lawrence

"Campbells" took full advantage of the foray into GraphiCraft, as I used the palette generated from the saved-out GraphiCraft 1.2 image, but then tweaked it by hand to accurately match the pre-release palette that Andy would have had available to him.  I feel like this is a great representative piece of his work on the Amiga, as well as for this project itself.
"venus" (As seen in Deluxe Paint III)
Original by Andy Warhol, Replica by Scott Lawrence

I had originally thought that "venus" was going to be pretty simple. It was mostly the "venus.iff" file that is included with every copy of Deluxe Paint.  (Here's a link to the original as seen in Deluxe Paint IV.) Well, there's those two orange dots on the left, the scribble on one eye, and the third eye... so it should be easy.  Right?  Nope.  

  1. The source material that I had was a JPEG, with its entire color palette completely wonky.  Browns looked violet... whites looked yellow... that kind of thing
  2. This means that I had to spend time not just picking the right color, but also cross-referencing the pixel in the JPEG with another of the same wrong color, but unchanged by Andy... then find THAT color in my DPaint version, and use that color.
  3. The forehead.  I don't know how... I don't know why, but Andy replaced MOST of the forehead with the version that's there... He probably just grabbed a brush and pasted it down a lot. There's some evidence of this being the case along the left side. Perhaps I'll analyze it more for another blog post in the future.  I ended up having to wipe out the entire forehead and re-do it entirely.
I do feel it necessary to note that many of the pixels in this are "close".. not specifically the exact correct shade of the colors.  There are a lot of orange/browns that are VERY similar and with the process of cross-referencing mentioned above, I know that I made a few mistakes here and there on this one.
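That cross-referencing step is basically a nearest-color lookup, and could in principle be automated. A sketch in Python (entirely hypothetical; I did all of this by eye in DPaint): given a few reference pixels whose shifted JPEG color and correct palette color are both known, map any sampled pixel to the palette color of its nearest reference:

```python
def nearest_palette_color( sample, references ):
    """references maps shifted-JPEG (r,g,b) -> correct palette (r,g,b)."""
    def dist2( a, b ):
        # squared Euclidean distance in RGB space
        return sum( (x - y) ** 2 for x, y in zip( a, b ) )
    # find the reference pixel whose (wrong) color is closest to the sample
    closest = min( references, key=lambda c: dist2( c, sample ) )
    return references[ closest ]
```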

I had felt a bit uneasy looking at this one for a while, but couldn't place exactly why... but it turned out that the forehead's replacement was a huge subconscious "something's not right here" flag in my head that I couldn't quite place. 
"flower" (As seen in Deluxe Paint III)
Original by Andy Warhol, Replica by Scott Lawrence

This one looked like it was going to be fairly straightforward to make... and it in fact was!  There were a lot of stepped lines, and the source material was scaled up and fuzzy, but I was able to work around this.  I remembered that DPaint has a tool "Coords" (press | to enable) which shows where your cursor is in X and Y, as well as the length of lines you're drawing.  This tool was invaluable for doing this replica.


Yes, I did notice that "Flower" and "Campbells" are missing the red rectangles in the upper right of the imagery in the provided ZIP and ADF files below.  Those are the screen raise/lower tools from AmigaDOS shining through.  Much like the "GraphiCraft" drag bar being visible, those should also have been visible. That's my mistake. The next version I cut of the slideshow will have that fixed.



Here's a list of links of products and stuff that may be interesting:
Everything provided here is to be considered to be Creative-Commons BY-NC-SA. (Attribution, Non-Commercial, Share-Alike). 

And previous blog posts about the topic:

Friday, April 28, 2017

RC2017/4 Update - ObeseBits is now online!

I've not gotten more images done yet, but I have exported the tool "ObeseBits" to the interwebs.  You can play with it here:

Press 'I' (uppercase) to pick a different image, 'G' and 'g' to change grid mode and settings, drag the mouse to move around in the image, and shift-drag to adjust the size of the grid.

I made it in Processing 3.3.1, in "JavaScript Mode".  When I got to this point, I just copied the entire directory to umlautllama.com, and there it is!

Enjoy!  Or don't! Whatever!