From: Robert M Ochshorn <mail@rmozone.com>
Subject: Re: looseleaf and hey!
Date: Mon, 20 Apr 2015 07:25:54 +0200

[Attachment: what-else.pdf]
> On Apr 18, 2015, at 10:08 PM, Cixa wrote:
>
> what else are you upto, i am curious about your (other) projects
>

From: Robert M Ochshorn <mail@rmozone.com>
Subject: all the metadata
Date: Sat, 18 Oct 2014 03:02:54 -0700


Hello Memory, World (of the)!


You of all people might appreciate this.


I missed the discussion around a paper [1] someone sent to my lab, but something in the “Schema-Independent Database UI” struck me as powerful with regard to a unified and coherent mental model for relational data. (I was recently pointed towards Ted Nelson’s ZigZag, which could probably be considered some sort of counterpoint.)


Failing to find more than a glitchy video online showing Related Worksheets in action, I implemented a very rudimentary version of my own, testing it on a ~60K-row relational database scraped from a German media center/archive as part of a series of artworks for their website. My prototype also explores embedding an interactive representation of video timelines in the spreadsheet, as you can see in a screencast I made; if the video is too distracting (with respect to how the spreadsheet works), you can see “just the data” here.
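
For concreteness, here is a minimal sketch of the “related rows” idea (not the prototype's actual code; the schema, names, and rows below are invented for illustration, not the real scrape):

    import sqlite3

    # Toy stand-in for the archive: one table of works, one of clips that
    # reference them. The real database has many more tables and ~60K rows.
    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE works (id INTEGER PRIMARY KEY, title TEXT);
        CREATE TABLE clips (id INTEGER PRIMARY KEY,
                            work_id INTEGER REFERENCES works(id),
                            start_sec REAL, end_sec REAL);
        INSERT INTO works VALUES (1, 'Lecture, 1987'), (2, 'Interview, 1991');
        INSERT INTO clips VALUES (1, 1, 0.0, 42.5), (2, 1, 42.5, 97.0),
                                 (3, 2, 12.0, 30.0);
    """)

    def row_with_related(work_id):
        # For a selected row, gather every row that references it, so the
        # spreadsheet view can nest the related rows under the selection.
        work = con.execute("SELECT * FROM works WHERE id = ?",
                           (work_id,)).fetchone()
        clips = con.execute("SELECT * FROM clips WHERE work_id = ? ORDER BY start_sec",
                            (work_id,)).fetchall()
        return {"work": work, "clips": clips}

    print(row_with_related(1))

The video embedding comes in at the rendering step: rows like these clips get drawn as segments on an inline timeline rather than as text.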


I’m hungry for feedback, though apprehensive about sending out the link, as it still feels somehow incomplete.




R.M.O.


[1] Ironically, I was with Glen at the time helping a film project imagine life beyond their “tailored database applications” in FileMaker.


From: Robert M Ochshorn <mail@rmozone.com>
Subject: Re: play around with book interface
Date: Sat, 27 Sep 2014 22:19:21 -0700


Dear Stephanie,


Thanks for writing! You’re right to identify an irony in the situation: I showed you interfaces that are supposedly preoccupied with new modes of visibility, and yet they are nowhere to be found “online.” There were a few book/reading projects I discussed at BOM:


— you’re probably thinking of my PDF reader experiment (aka “looseleaf,” aka “BOOKS WITHOUT COVERS”). An early sketch, operating on the Vasulka PDF Archive, demonstrates some of its strengths (as well as weaknesses) on a heterogeneous archive. I’ve made a few tweaks to the UI since then, and also have a simpler version for scrubbing through PDFs individually, which you can play with here.



— while hardly a “reconceptualization of the digital book interface,” I also demonstrated some text/image collisions. The most compelling illustration of this is the animation through all of the words of a neuroscience paper (grouped by size, and ordered by visual similarity):



To break it down a bit, here is a scatterplot of all of the small words in the paper:



If you look at this one carefully, you’ll notice that these “word-forms” are not made up of letters, but rather goop together. I’m using a statistical technique called Principal Component Analysis to represent each word by a vector, where each successive coefficient contributes less than the last. The first two coefficients of each word determine the x-y position in the scatterplot, and in this figure I’m only using 12 coefficients to draw each word, which isn’t sufficient for legibility:
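
If it helps to see the machinery, here is a rough sketch of that projection in Python/scikit-learn (not my actual code; the random array stands in for the rasterized word images, which are the only document-specific part):

    import numpy as np
    from sklearn.decomposition import PCA

    # Stand-in data: each row is a flattened grayscale "word image".
    rng = np.random.default_rng(0)
    word_images = rng.random((500, 32 * 96))   # 500 words, 32x96 pixels each

    pca = PCA(n_components=12)                 # keep 12 coefficients per word
    coeffs = pca.fit_transform(word_images)    # each successive one matters less

    xy = coeffs[:, :2]                         # first two -> scatterplot position

    # Drawing a word back from only 12 coefficients: project to pixel space.
    # With so few components the letters blur into one another, hence the goop.
    redrawn = pca.inverse_transform(coeffs).reshape(-1, 32, 96)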



I also showed an idiosyncratic PDF reader based on extending this technique to allow an absurd and rather convoluted substitution on each word in the document: word → word image → PCA vector → k-means cluster → cluster centroid PCA vector → word image. The thing that interests me about this transformation is how the “training data” can burst out rather surprisingly into each page:
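
Sketched out (again on stand-in data rather than the real rasterized words), the chain looks something like this:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    # Invented stand-in for the rasterized word images from the document.
    rng = np.random.default_rng(0)
    word_images = rng.random((500, 32 * 96))

    pca = PCA(n_components=12).fit(word_images)
    vectors = pca.transform(word_images)            # word image -> PCA vector

    km = KMeans(n_clusters=50, n_init=10, random_state=0).fit(vectors)
    centroids = km.cluster_centers_[km.labels_]     # vector -> cluster -> centroid vector

    # centroid vector -> word image: this is what gets drawn back on the page,
    # which is how the "training data" bleeds into every substituted word.
    substituted = pca.inverse_transform(centroids).reshape(-1, 32, 96)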



— finally, I had brought a copy of my Hyperopia book to Birmingham and tried to show myself thinking through the interface. There was one moment I fondly remember from the talk where I couldn’t recall one of my references and had to retrieve it by unraveling a long receipt of my preparations; that moment is a high point in my memory of the BOM presentation. Though I got some good feedback, I had the uncomfortable feeling of being dragged along by my work, rather than the other way around. In any case, it’s nice to hear that something stuck in your memory so many months later!



Please let me know if you’d like to play around with any of these beyond the links I sent and I will do my best to accommodate. For what it’s worth, I publish the source code to all of these projects. Documentation is lacking, but I can help you get started in case you’d actually like to run/modify any of these.


I hope you’re well and that we will have occasion to meet again soon.


Onward,

R.M.O.


From: Robert M Ochshorn <mail@rmozone.com>
Subject: App/Territory v.3 - "Exploding Cello"
Date: Fri, 12 Sep 2014 04:50:46 +0100


Dear Rob, and I include Nicolas because he just wrote, amused to find me on French TV reruns, and also because there’s a part of him in this project, to be sure.


The cello recordings you sent are gorgeous. I think they work even better than the sax. I’d so love to hear what you make with this, or any scraps along the way (you may want to filter the audio a bit, as some of the edges can be sharp).



I ended up using three different “maps” for the sound, for three different sample banks. Since I partitioned the samples based entirely on numerology, I couldn’t tell the difference between any of the sections, so I tweaked the algorithmic weights/features used in each spatialization. Can you hear the timbral/harmonic differences suggested by each?
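
If you're curious about the mechanics: each map boils down to weighting per-sample features differently before flattening them to x/y. A simplified sketch, with invented feature values and weights rather than the instrument's actual code:

    import numpy as np

    # Made-up per-sample features (think brightness, noisiness, attack, pitch);
    # in the instrument these would come from analyzing the recordings.
    rng = np.random.default_rng(1)
    features = rng.random((200, 4))                # 200 samples, 4 features

    # A different weighting per bank gives each bank its own "map",
    # even when the underlying material is similar.
    bank_weights = {
        "bank_1": np.array([2.0, 0.5, 1.0, 0.2]),
        "bank_2": np.array([0.3, 2.0, 0.3, 1.5]),
        "bank_3": np.array([1.0, 1.0, 1.0, 1.0]),
    }

    def spatialize(feats, weights):
        # Weight the features, then keep the two largest-variance directions
        # (via SVD) as the x/y coordinates of the map.
        centered = feats * weights
        centered = centered - centered.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return centered @ vt[:2].T                 # shape (n_samples, 2)

    maps = {name: spatialize(features, w) for name, w in bank_weights.items()}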


I hope the setup will be reasonably straightforward on Hanae’s iPad (or whatever else you try it on), though getting these things to work offline is always a dance. Here’s what worked for me (a sketch of the caching machinery follows the list):

1. when you have a good Internet connection (the instrument weighs 33 MB), go to the website and immediately save it to the home screen; 

2. close the web browser; 

3. open it from the home screen and wait until the “Please wait” magenta indicator goes away;

4. close it and open it again; you shouldn’t see the “Downloading” bar this time (but wait for it if it insists); 

5. close it, switch to airplane mode, and open it again. Maybe it’ll work? 
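
In case the dance seems arbitrary: home-screen web apps of this kind typically stay usable offline via an HTML5 cache manifest that the browser fetches in the background (presumably what the “Downloading” bar is waiting on). A generic sketch, with invented file names rather than the instrument's actual manifest:

    CACHE MANIFEST
    # bumping this comment forces a fresh download on the next open
    index.html
    instrument.js
    samples/bank_1.mp3
    samples/bank_2.mp3
    samples/bank_3.mp3

The page opts in with a manifest attribute on its html tag, and only a changed manifest triggers re-downloading, which is why the second open should skip the “Downloading” bar.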


Good luck with it this weekend!


Your correspondent,

R.M.O.


From: Robert M Ochshorn <mail@rmozone.com>
Subject: Brunch/Demo Followup
Date: Mon, 8 Sep 2014 15:23:01 -0700


Dear Alex and Danny,


Brunch yesterday was great, and I really enjoyed our conversation afterwards at CDG—thanks for coming by. You asked whether any of the things I showed you were online, and I muttered something to the effect of “yes, everything is online somewhere, but I’d better compile the links for you.” Here goes:


— The conceptual framework for much of what I showed you was already sketched out three years ago, when I started a research fellowship at the Jan van Eyck Academie (in Maastricht, NL). They didn’t accept me the first time I applied, so before applying again I tried to legitimize myself by doing most of the work I was proposing to do there before I even started. Funny how these things work. I posted some notes, links, and screencasts from my “opening week” presentation at JVE.


— While at JVE, I started an ongoing collaboration to make Montage Interdit, which was presented at the 2nd Berlin Documentary Forum in 2012. The work is still unfinished, but I have a screencast and a few stills online that may help explain some of our ambitions.


— MI gave me something more developed to talk about—a realized scenario coming out of my scattered prototypes—and after I spoke at VideoVortex9 (in Lüneburg, DE), the conference organizers asked if I would work with them on a “hybrid video reader,” which we finished at the end of last year. Meanwhile, I started playing with animated temporal maps of video and adapted my old timeline study into a piece for the 59th International Short Film Festival Oberhausen (which has an overcompressed interactive version and some text).


— I started this year with a six-month fellowship at Akademie Schloss Solitude (Stuttgart, DE), where I made the first version of the Hyperopia book (out of which I made you a receipt of your visit). It was a gift for CDG when I visited in February. I’m still figuring out the best way to get my new GPU-based timelines online (beyond a screencast), due to some bandwidth optimizations that still need consideration, but you can play with a webcam version that's languishing in a remote subdirectory (or, as with everything here, poke around in some illegible source code).


Apologies for my incoherent web presence. Be in touch!


Your correspondent,

R.M.O.