What if you, being here, could be a virtual presence anywhere?
Project Anywhere is an attempt to breach the limits of physical human presence by replacing kinaesthetic, visual, and auditory stimuli with artificial sensory experiences, in a fully interactive virtual environment.
7x Cabin is the product of a three-week collaboration between MAS CAAD ETHz and Tom Pawlofsky that took place in October 2013. The purpose of the project was to adapt robotic fabrication technology to conventional design tasks, such as the design and production of a hut structure. For that, we worked with a chainsaw mounted on a 7-axis industrial KUKA robot, which was programmed to process local raw and irregular timber and cut the mechanical connections between logs.
Tom Pawlofsky’s Rhino2KRL plugin was used to export KUKA source code.
The cabin was reassembled and displayed on the ETHz campus at Hönggerberg during February 2014.
http://www.caad.arch.ethz.ch/blog/7xcabin-robotic-log-processing/7xCabin
MAS CAAD ETHz
Theory Module 1
Dr. phil. Vera Bühlmann
Music and images: Constantinos Miltiadis
All numbers, colors, and shapes were generated by processing Deleuze & Guattari’s “Geology of Morals”.
Better late than never: the English booklet for my thesis.

A video of crowd behavior simulation in an existing context.
The agents have to move from one checkpoint to another. Along the way they interact with each other, and some interesting patterns emerge.
The algorithm was developed with the object-e office, using Python scripting in RealFlow.
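The checkpoint behavior described above can be sketched in a few lines of plain Python. This is an illustration written for this post, not the original object-e/RealFlow script; all names, speeds, and radii are made-up values:

```python
import math

class Agent:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.target = 0  # index of the checkpoint currently sought

def step(agents, checkpoints, speed=0.5, sep_radius=1.0, reach=1.5):
    """One simulation tick: seek the current checkpoint, avoid crowding."""
    for a in agents:
        cx, cy = checkpoints[a.target]
        dx, dy = cx - a.x, cy - a.y
        d = math.hypot(dx, dy) or 1e-9
        # steer towards the current checkpoint
        vx, vy = dx / d, dy / d
        # push away from neighbours that are too close (separation)
        for b in agents:
            if b is a:
                continue
            ex, ey = a.x - b.x, a.y - b.y
            e = math.hypot(ex, ey)
            if 0 < e < sep_radius:
                vx += ex / e
                vy += ey / e
        # move at constant speed along the combined direction
        n = math.hypot(vx, vy) or 1e-9
        a.x += speed * vx / n
        a.y += speed * vy / n
        # advance to the next checkpoint once close enough
        if d < reach and a.target < len(checkpoints) - 1:
            a.target += 1

agents = [Agent(0.0, i * 0.5) for i in range(4)]
checkpoints = [(10.0, 0.0), (10.0, 10.0)]
for _ in range(200):
    step(agents, checkpoints)
```

The "interesting patterns" the post mentions come out of exactly this interplay between the seek force and the separation force, with no further choreography.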
object-e, August 2012
with Dimitris Gourdoukis, Christos Gourdoukis, Spyros Efthymiou, and Giorgos Anagnostopoulos
(Some will say this is not the time. I disagree. This is the time when every mixed emotion needs to find voice.)
Since his arrest in January, 2011, I have known more about the events that began this spiral than I have wanted to know. Aaron consulted me as a friend and lawyer. He…
This is an agent system based on Craig Reynolds’ boids algorithm. My version is pretty simple: it only has the three rules and some boundary constraints, no fancy steering or viewing angle. Other than that, it includes some functions to send particle positions to Grasshopper and have them there as points.
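For reference, the three rules (separation, alignment, cohesion) can be sketched in plain Python. This is a hedged illustration, not the Processing sketch offered for download, and all weights and radii are made-up values:

```python
import math

class Boid:
    def __init__(self, x, y, vx, vy):
        self.x, self.y, self.vx, self.vy = x, y, vx, vy

def flock(boids, radius=5.0, max_speed=1.0):
    """One tick of the three boids rules, with a simple speed clamp."""
    for b in boids:
        neigh = [o for o in boids if o is not b
                 and math.hypot(o.x - b.x, o.y - b.y) < radius]
        if not neigh:
            continue
        n = len(neigh)
        # cohesion: steer towards the neighbours' centre of mass
        cx = sum(o.x for o in neigh) / n - b.x
        cy = sum(o.y for o in neigh) / n - b.y
        # alignment: match the neighbours' average velocity
        ax = sum(o.vx for o in neigh) / n - b.vx
        ay = sum(o.vy for o in neigh) / n - b.vy
        # separation: move away from each neighbour
        sx = sum(b.x - o.x for o in neigh)
        sy = sum(b.y - o.y for o in neigh)
        b.vx += 0.01 * cx + 0.1 * ax + 0.05 * sx
        b.vy += 0.01 * cy + 0.1 * ay + 0.05 * sy
        # clamp the speed so the flock stays stable
        s = math.hypot(b.vx, b.vy)
        if s > max_speed:
            b.vx, b.vy = b.vx / s * max_speed, b.vy / s * max_speed
    for b in boids:
        b.x += b.vx
        b.y += b.vy

boids = [Boid(0.0, 0.0, 1.0, 0.0), Boid(1.0, 0.0, 0.0, 1.0),
         Boid(0.0, 1.0, -1.0, 0.0)]
for _ in range(50):
    flock(boids)
```

The actual sketch adds boundary constraints and the OSC export on top of this core.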
Other features are:
You’ll need: PeasyCam, oscP5 and toxiclibs (I provided the links in the Processing file).
Download link for the Processing sketch and Grasshopper definition
*I got somewhat familiar with particles last August, when three colleagues and I ditched our holidays and went to work at object-e with Dimitris Gourdoukis, to produce some agent-based urban analysis tools. There we worked with MAXScript, 3ds Max’s Particle Flow, and RealFlow. Check the object-e site, where these files will be uploaded sometime soon.
Here is a booklet (in Greek) and a video about my thesis project:
Better watch the video in 1080 HD.

Finally! I presented my thesis project three weeks ago and finished my studies (yay). My project was about designing a structure that could interact, move, and adjust to circumstances (there you go, trendy terms: interactive and kinetic). This was solved through programming; it couldn’t have been designed conventionally, since what I wanted was not something that potentially moves but something I could make actually move (CAD software supports neither the dimension of time nor procedural design). So one part was working out a structure that can do that, and the better part was coding: designing the function, designing for function.
So I began writing a program that would do that, and ended up with around 3000 lines of (Processing) code. The whole bundle controls the movement/transformation of the structure; recognizes people in space (using the Kinect sensor) to interact with; sends data to a CAD program to draw the representational model (of its current state); and also outputs commands to an Arduino to make some motors move.
To do this, I used a lot of things that a lot of very kind people have opened up to the web: firstly the Processing programming language and IDE and the Arduino microcontroller; then some things, or better “tools”, called libraries, such as toxiclibs, PeasyCam, SimpleOpenNI, oscP5, and BlobDetection (I am sure I forgot something); and also other tools for Grasshopper called plug-ins, such as gHowl and Kangaroo physics. These I found on the internet and used for free, and if it weren’t for these people (the open-source community in general) and the fact that their work is free, my project wouldn’t have been possible…
This is how things work and develop in this field. Some schizo produces the framework (i.e. Fry and Reas, who made Processing) and the thing grows and forks all over the place. What this essentially means is that, given an object that is free and whose plans (its source code) you are granted access to, for some strange reason a social trait of creative solidarity emerges: offering your work back to the commons. It’s strange because we rarely observe this behavior either in real life or in other corners of the social web. If I may note an example from the architectural world: one can rarely find free rendering materials, created by a third party, to use in his/her project; and we can agree that it’s not too difficult or time-consuming for someone with some degree of expertise to create a material. In open-source culture, on the other hand, you can easily find either tools (libraries) or examples of code for pretty much everything you can imagine… and free of any charge too. Visiting the fora for Arduino and Processing, you can see people trying to help each other, solve each other’s programming problems, and exchange chunks of code. Furthermore, if you ask a question in any of the open software/hardware fora, I guarantee that you’ll get quicker and more responses than in any of the commercial software fora populated by users (even quicker than asking licensed software’s support staff).
And the thing here is that these people don’t know each other in person and will probably never meet. They could live in different countries, have different backgrounds, mother tongues, and living standards, and belong to radically different social classes (and this is actually the case most of the time). In real life these people would never interact, even if they lived next door to each other. But this is a by-product of the internet, isn’t it? OK, yes, the internet brought us close. But massive non-commercial collaboration among us was never envisioned before open source hit the “general public”. And this is the surplus value that comes along with open source. Take money out of the equation and we are all classmates.
The thing is not only that we (inherently) tend to help each other reach a common higher state of abilities (for example, all becoming good programmers); this is only the straightforward value of OS. The other, what I call the surplus, is the quality that describes assemblages: (among others) being more than the sum of their parts. An example: a population has an average IQ of 80; the highest IQ in the group is, say, 120. Helping each other (education, maybe) strives to make everyone smarter and raise the average towards 120. Everyone being smarter means that everyone would be better on his/her own, in his/her own field; and everyone being better at what s/he does means that everyone will produce stuff that benefits the whole. This is the primary, linear interaction between the group and an individual: receive-return. Now is when things get kind of complex. I was saying how everyone can produce something great, but this is actually limiting, because this group -its parts- can only produce stuff that reaches “120-great”. If instead you start mixing qualities, mixing together everyone’s knowledge, you get the surplus. Nobody knows all the stuff on Wikipedia, but if one counts everybody separately and does the addition, we certainly do; not only do we know Wikipedia, we know hundreds of times more.
Eventually, it doesn’t really matter how smart you are; whether you contribute or not is what matters. Contributing to the whole, for people to borrow, edit, improve, and come up with different end products, is what makes the assemblage. The assemblage is more than the sum of its parts because the assemblage does not have an IQ of 120; it has 120 to the nth power… (somebody called Gilles said this first)
The open-source community is an assemblage; it’s a creative avalanche. Being part of it raises its power to n+1. And in the end, whether you are part of it or not, at some point you will definitely benefit from it (don’t you visit Wikipedia? You are not obliged to contribute in order to be allowed to visit and read).
So similarly, in the course of working on my project, I did produce some stuff that could be useful to other people. If I gave them the material and (note) the permission to use/edit/republish it (the basis of the GPL license), they could utilize it for projects in (probably vastly) different fields from mine. And so I did; I have open-sourced two things so far: sending point arrays from Processing to Grasshopper, and some manipulation methods and infrared people tracking for the Kinect sensor, which can be found here and here. (I have some more things to leak when I find some time, such as an agent system, how to have more than one window in P5, and a method to control stepper motors from Processing without programming the Arduino.)
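For the curious: the point-array streaming runs over OSC/UDP (oscP5 on the Processing side, gHowl in Grasshopper). The snippet below is not my published code but a hand-rolled illustration of the OSC wire format such a message uses; the address “/points” and the port number are arbitrary examples:

```python
import struct

def osc_string(s):
    """Encode an OSC string: ASCII, NUL-terminated, padded to 4 bytes."""
    b = s.encode("ascii") + b"\0"
    b += b"\0" * (-len(b) % 4)
    return b

def osc_points(address, points):
    """Pack a list of (x, y, z) points into one raw OSC message.

    Layout: padded address, padded type-tag string (one 'f' per float),
    then the coordinates as big-endian 32-bit floats.
    """
    floats = [c for p in points for c in p]
    typetag = "," + "f" * len(floats)
    msg = osc_string(address) + osc_string(typetag)
    msg += struct.pack(">%df" % len(floats), *floats)
    return msg

# Sending it is a one-liner, e.g. to gHowl listening on port 6000:
# import socket
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(osc_points("/points", pts), ("127.0.0.1", 6000))
```

On the Grasshopper side, gHowl’s UDP receiver unpacks the floats back into point coordinates; in Processing, oscP5 builds the same kind of message for you.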
Now, I have no idea if anybody is interested in my stuff, or if anybody has already used something of mine. I would be happy if somebody benefits from my work someday. I’d be even happier if somebody takes it and improves or enhances it (adds his/her n+1) in the course of his/her project, because then I would know that (a) what I did is useful, and (b) I would be offered something that goes beyond what I was able to do or think.
This is how things work in open source. This is why they happen fast, and in every direction. Somebody makes a tool; somebody else takes it and uses it for something else. The creator, the n=1, could never know the use that people after him/her will give to the tool; or better said: “the inventor doesn’t know the invention”. People will find the (multiple) uses for the invention.
Another example. Some years ago Nintendo made the Wii (the game platform where you play tennis holding a remote control in your hand). The main thing was its controller, the Wiimote. This little thing has an infrared transmitter, Bluetooth, accelerometers, and gyroscopes, but because it’s mass-produced it only costs something like 30 euros or so. Well, some people saw potential in this thing beyond wiggling it around and playing virtual tennis (or whatever), and so they began a process called reverse engineering to try to hack it (at the hardware level) and use it for something else. A lot of people worked on this and finally did it (http://wiibrew.org/wiki/Homebrew_Channel). They offer what they did for free, so it can be used by anybody to do anything. Now, the thing is that what they did is actually kind of illegal. The Wiimote is proprietary hardware and can legally be used only as a tennis-whatever remote control for the games that Nintendo publishes for you. So the Wiimote, as ingenious as it (really) is, is not a contribution, or a tool; it’s only a product restrained to consuming Nintendo stuff (proprietary product, and proprietary use too).
Unlike the Wiimote, the Microsoft Kinect followed a different path. A couple of years back, Microsoft made the Kinect for the Xbox 360 (another game console). This is an infrared/RGB camera that can see stuff/people in space, which means that with the Xbox you can play tennis without holding anything! Fantastic! Naturally, some people thought that it might do other (equally) useful stuff besides tennis-without-a-remote, and got into the process of hacking it… Note that the Kinect came some years after the Wii, so Microsoft knew the precedent. Some months after it was released, when it was known that people were already disassembling it, Microsoft decided to release the source code (of the device data output) so that people could legally use it beyond the gaming console. Eventually, the Kinect is a contribution. When you buy it, you buy the hardware without restrictions on its use. It is now a tool, and anybody can find a use for it (in parallel to presumably being patented hardware). Furthermore, Microsoft also opened a framework called the XDK so that people could legally develop their own software for the Xbox too. And the thing really flourished. The projects that people conceive of and undertake are unimaginable.
The previous examples refer to tools that can be taken and given a use. They are usually developed in the course of larger projects and offered as autonomous objects, while the “larger project”, the end product, is not offered at all. This is less than full-blown open-sourcing. Determined open-source kind of people offer everything. A nice example of this is Lady Ada, a talented engineer who has released a ton of complete projects for free: stuff like an mp3 player, an oscillator synth, a LED-message thingy, and many, many more original and interesting projects. She is one of the leaders of what is called the “maker movement”, and she made the cover of Wired (the magazine). But Lady Ada didn’t actually make any money selling the stuff she designed, and in fact she is getting ripped off by some people who take her designs, mass-produce them, and sell them. Still, she keeps doing what she does best. The questions are why she continues doing this, and how she actually survives. Well, there is a great lecture she gave justifying the why, which you can watch here. And how does she survive? She owns a company called Adafruit that sells component kits for people to build the devices she designed. So she sells stuff like circuit boards, chips, microcontrollers, resistors, and capacitors. You’d think that she must be having a hard time, but I can tell you she is not getting evicted any time soon. She earns money selling these building kits, which are offered for significantly less than the price of the end product. Both ends benefit; it’s a win-win.

Now back to my project. As I said, I designed a structure that can move and wrote a program that controls it. The structure is a pyramidal space-frame surface consisting of hydraulic beams, which has a lot of capability to move between boundaries. My program computes the transformations and sends signals (pulses/PWM) to a distributor or motor (hydraulic compressor), which feeds the beams so that they change their length. I don’t even know if this thing would work for real; I think it would, but I don’t have the means to construct and test it. If I opened the source code to the web, somebody could download it, take a look at the program, improve it, and build it. Pretty good.
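To give an idea of that control loop, here is a hedged sketch (not a piece of the actual 3000-line program): a proportional mapping from each beam’s target length to a PWM duty cycle, with made-up limits and gains:

```python
def length_to_duty(target, current, l_min=1.0, l_max=2.0):
    """Map a desired beam length to a PWM duty cycle in [-1, 1].

    Positive duty extends the beam, negative retracts it. The target
    is clamped to the beam's (assumed) mechanical limits first.
    """
    target = max(l_min, min(l_max, target))
    # proportional control: duty proportional to the remaining error
    error = target - current
    span = l_max - l_min
    return max(-1.0, min(1.0, error / span))

def step_beams(targets, currents, rate=0.1):
    """One control tick: nudge every beam length by its duty cycle."""
    out = []
    for t, c in zip(targets, currents):
        duty = length_to_duty(t, c)
        out.append(c + rate * duty)
    return out

# three beams converging to three target lengths over 100 ticks
lengths = [1.0, 1.5, 2.0]
targets = [1.8, 1.2, 1.4]
for _ in range(100):
    lengths = step_beams(targets, lengths)
```

In the real setup the `out` value would not update a variable but be emitted as a pulse train to the valve/motor driver via the Arduino.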
Though… I lean towards not doing so, and I’ll tell you why. Say it actually works, it’s a big deal, and some people see potential in it and want to build it. But my project is not a $40 mp3 player; it would cost some tens of thousands of dollars -or more- in pneumatic beams, compressors, sensors, and metal joints, and probably additional professional programming work. Somebody interested in building it would certainly not do so to benefit the whole, but to invest, to make money out of it. This purpose contradicts why I bothered working on the project in the first place…
The purpose I have for my project (besides some secondary performative applications) is kind of different, and has a social dimension. It’s destined to be placed, as an installation, in public space; I specifically proposed setting it up in Omonia Square in Athens. My presumption is simple, and it goes as follows: setting in public space an object with interactive capabilities, the ability to do something relative to what people do, to be actuated by them and to provoke them too, would act as a catalyst for the social. This kind of blind object, which only sees people (and not color, class, age, or anything of that sort), offers back to them an equal capability to affect it, which is what -in my opinion- creates a social bond in space that didn’t exist before. Regarding use, this thing is kind of “useless”; it doesn’t do anything, except perhaps a collective morphogenesis (which probably is not of much interest), but what is actually gained is the surplus social, produced in the process of interaction: interaction between people and the structure, and interaction among people themselves. Unrelated people who would otherwise never gather together take part in a collective process (form-finding, or just play) that bonds them, hopefully, for longer than the duration of the interaction.
Why did I go to so much trouble to do this?
Because a funny thing that moves would work as a “particularity” for a place that has no, or only a loose, spatial character, and thus is not regarded by people as a locus in the city, but rather as a place one has to pass through to get from A to B. It will probably attract people to come and stay for a while, not only because of the installation and what it does, but because the installation gives that place a character that distinguishes it, transforming it from generic to particular; it works as a landmark.
Furthermore, I believe -at least for my country- that the public has failed. It has failed as space and as a common; it constantly loses ground in the battle with the private and the commercial. It is not only a spatial quality that disappears, but a general one: the sense of the whole, the commons, the public, the general ownership of the public (which is different from state ownership). That being the case, I wanted to try and -artificially- bring back the sense of the social in public space, among co-located and unrelated people, in a way that could reflect back on metropolitan life in general.
So that’s why I think I will not open-source my code: because if it’s built, it will be used in applications aiming at profit; and the thing is not that I wouldn’t get my share, but that my work would get twisted into just another flavor of the week by somebody who won’t even care who/what/why.
I will certainly post some chunks that I consider useful, as well as my presentation (drawings, renderings, etc.), so stay tuned. However, I am still thinking about whether to open the source code or not.

(or, in simple terms, 3D blob tracking from the Kinect infrared sensor in Processing)

Since last time, I have had to enhance my Kinect functions to a large extent, so as to have more options while programming for interaction. Some of these are record and playback, and blob detection. I could not find any examples of blob detection on infrared data, and because I want to detect stuff in XYZ I couldn’t use the RGB camera. So I wrote some functions for that, using an off-screen buffer to draw the points in plan and feed that image to the blob detector.
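The off-screen-buffer idea can be sketched in plain Python (an illustration, not the downloadable Processing code): project each 3D point onto a coarse 2D “plan” grid, then label 4-connected groups of occupied cells as blobs. The cell size is an arbitrary assumption:

```python
def detect_blobs(points, cell=0.5):
    """Group 3D points into blobs via their XZ plan positions.

    Rasterises each (x, y, z) point into a grid cell, then flood-fills
    4-connected components of occupied cells, one component per blob.
    """
    occupied = {(int(x // cell), int(z // cell)) for x, _, z in points}
    blobs, seen = [], set()
    for start in occupied:
        if start in seen:
            continue
        # flood fill one 4-connected component
        stack, blob = [start], []
        while stack:
            c = stack.pop()
            if c in seen or c not in occupied:
                continue
            seen.add(c)
            blob.append(c)
            i, j = c
            stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
        blobs.append(blob)
    return blobs

# two clusters of points -> two blobs
pts = [(0.1, 1.0, 0.2), (0.4, 1.2, 0.3), (5.0, 1.1, 5.0)]
print(len(detect_blobs(pts)))  # 2
```

In the Processing version, the grid is simply the pixels of the off-screen buffer and the flood fill is done by the BlobDetection library.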
The features include:
Transformation functions for aligning the position of the sensor in Processing with that of the real sensor in your installation (real transformations, without matrices):
Record/playback functions:
Blob functions:
The libraries I used (the developers of which I would like to thank):
Download p5 source here (broken link fixed)
notes:
Have fun and share alike :)

Adaptive Spaceframe: Photos and Details



Fire ball
Buddha Compilation I

Rhino+Grasshopper Parametric Tutorial

triplecanopy: TV Helmet (The Portable Living Room) by Walter Pichler, 1967, Small Room (Prototype 4)