This year XP Day is bigger and better than ever before. There are now six (count 'em!) tracks of sessions, which has allowed the conference to both grow a little larger and have more focused sessions.
I will be running an introductory tutorial with Romilly Cocking and Steve Freeman on Test-Driven Development with JUnit and jMock 2. Although introductory, the tutorial will go beyond the basics and explain how to use TDD to guide object-oriented design and how to recognise and avoid common misuses of TDD and mock objects.
I will also be at XP Day Benelux where I will be running a tutorial with Steve Freeman on "Sustainable Test Driven Development". This is a more advanced, hands-on, experiential tutorial that shows how to write tests that are easy to read and so clearly explain what a codebase does and why.
XP Day always sells out early, so if you want to attend you'd better book soon.
Update: XP Day London has now sold out. However, if you want to attend you can join our waiting list to be informed if a place becomes available.
A Christmas Quiz Game. I want a quiz game to play at Christmas. The most important feature is that I don't want to have to prepare the quiz beforehand. The second most important feature is that the quiz should help avoid family arguments by keeping scores. Exactly how the game is to be played is part of the challenge. What is the quiz about? Is there a quizmaster? If not, how do players make their guesses? Do all players try to answer the same questions or do they take turns?
I have been writing a location-aware music player in my research project at Imperial College, so I had quite a large collection of music on my laptop in Ogg Vorbis format, categorised by artist, album and track, and I knew how to use the GStreamer library in Python to play that music. So I decided to write an automatic version of Name that Tune. The idea of the game would be to play ten seconds of a random song and the players would have to guess the artist and song name. This time Ivan tried the challenge as well on his laptop, so we swapped ideas as we worked.
The GStreamer APIs are quite complex, dealing as they do with the asynchronous playback of generic media streams. While exploring my music collection I discovered I had a command-line program called ogg123 installed, which plays a single Ogg Vorbis file. We changed our plans and decided to run ogg123 from a command-line Python program and so avoid the difficulties of writing an event-driven application -- we were pressed for time, after all.
To play the first ten seconds we planned to run ogg123 in the background, sleep for ten seconds and then kill the background process. However, on reading the help text for ogg123 we found that it had a command-line argument to play the first n seconds. Our code became much simpler: there was no need to run the process in the background now, so we could concentrate on recording the scores and managing players.
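In outline, playing a clip became a single blocking call. A minimal sketch of that simplification (the helper names are ours, not from the original program):

```python
import subprocess

def ogg123_command(ogg_file, seconds=10):
    # Build the command line: --end plays only the first N seconds,
    # so ogg123 exits by itself and no background process is needed.
    return ["ogg123", "--quiet", "--end", str(seconds), ogg_file]

def play_clip(ogg_file, seconds=10):
    # Blocks until ogg123 finishes playing the clip.
    subprocess.call(ogg123_command(ogg_file, seconds))
```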
While writing the program we made sure that it was always in a working state. We started with a simple program that picked a random music file and played ten seconds. We then added code to print out the artist and song name to act as a question. Next, we added a list of players, passed in on the command line, and made the program ask each player in turn. Then we made the program keep track of the scores for each player and print out the scores after each question and when the program exits. Finally we rotated the quiz-master responsibility between the players: when a player answers a question they become the quiz-master for the next player.
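The rotation falls out naturally from itertools.cycle, which our program used to loop over the players endlessly. A toy sketch of the turn order (the player names are invented):

```python
from itertools import cycle, islice

players = ["Alice", "Bob", "Carol"]

# cycle() yields the players endlessly, so the answering (and hence
# quiz-mastering) duty rotates round the table until interrupted.
turn_order = list(islice(cycle(players), 7))
```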
We just got the program working in time and didn't have time to clean it up. The code is at the end of this article.
At the end of the challenge there was some controversy as to whether our solution actually passed the first requirement: that I should not have to prepare the quiz data beforehand. After all, I had to rip the Ogg files from my CD collection onto my laptop's file system before I could run the quiz. If someone had done the same thing with their iTunes database we would have accepted that as a solution but because I don't store my main music collection on my laptop we conceded that our solution didn't meet the requirements. (Update: Carlos Villela has created a solution that controls iTunes on MacOS X.)
The winning pair wrote a similar program in Perl. Instead of playing music they asked questions about films. Their solution was ingenious: they screen-scraped IMDB to get the name of the film and then presented several questions about the film: what was the genre, when was it made, who were the starring actors, and so on. To verify the answers, the quizmaster switched to their web browser: the program had opened the IMDB page about the film in the browser while the quizmaster was asking the questions!
Here's our code:
#!/usr/bin/python

import sys
import os
import subprocess
import random
from itertools import *


class Track:
    def __init__(self, artist, album, track, file):
        self.artist = artist
        self.album = album
        self.track = track
        self.file = file

    def __str__(self):
        return self.__class__.__name__ + str(self.__dict__)


def all_tracks(root):
    for artist in os.listdir(root):
        artist_path = os.path.join(root, artist)
        if os.path.isdir(artist_path):
            for album in os.listdir(artist_path):
                album_path = os.path.join(artist_path, album)
                if os.path.isdir(album_path):
                    for track in os.listdir(album_path):
                        if track.endswith(".ogg"):
                            file_path = os.path.join(album_path, track)
                            yield Track(artist, album, track, file_path)


def run_turn(player, tracks, scores, time=5):
    track = random.choice(tracks)
    print "Can", player, "guess this track in", time, "seconds"
    print "Artist:", track.artist
    print "Album: ", track.album
    print "Track: ", track.track
    subprocess.call(["ogg123", "--quiet", "--end", str(time), track.file])
    answer = None
    while answer != 'y' and answer != 'n':
        sys.stdout.write("Correct? (y/n) ")
        answer = raw_input().lower()
    if answer == 'y':
        scores[player] = scores[player] + 1
    print player, "has", scores[player], "points"
    print "Pass the computer to", player
    print player, "press Return to continue"
    raw_input()


def find_winners(scores):
    best_score = 0
    best_players = []
    for player, score in scores.items():
        if score > best_score:
            best_score = score
            best_players = [player]
        elif score == best_score:
            best_players.append(player)
    return best_players


def print_scores(scores):
    for player, score in scores.items():
        print player, "scored", score
    print ""
    winners = find_winners(scores)
    if len(winners) == 1:
        print "the winner is:", winners[0]
    else:
        print "the winners are:", ", ".join(winners)


tracks = list(all_tracks("/home/nat/music"))
players = sys.argv[1:]
scores = dict((player, 0) for player in players)
try:
    print players[-1], "asks the first question"
    print players[-1], "press Return to continue"
    raw_input()
    for player in cycle(players):
        run_turn(player, tracks, scores)
except KeyboardInterrupt:
    pass
print ""
print ""
print "Final scores:"
print_scores(scores)
Here is my solution to the Presentation Package challenge from the SPA 2007 Scrapheap Challenge workshop. The challenge was:
A Presentation Package. I want to be able to type in a list of sentences that summarise what I will talk about during each slide of the presentation. For each summary the tool should suggest pictures that illustrate the point I want to put across and let me pick one picture per slide to build a presentation. It shows that presentation full-screen.
To write a solution in 90 minutes I used as much of the infrastructure of the GNOME desktop environment as I could.
The user writes their slide summaries in a text file using the Gedit text editor. For example:
Scrapheap Challenge is a workshop about using other people's software
We have created a scrapheap for you to use called the Internet
You have to work in pairs
You will be given three challenges
The first pair to complete the challenge wins
Then swap pairs before the next challenge
We will have a short retrospective after each challenge
And a long retrospective at the end of the workshop
Prizes will be awarded on completely arbitrary criteria
I wrote a little Python script that turned those summaries into comma-separated tags and used a Python API to the Flickr search webservice to pull down ten pictures that matched each set of tags. I chose Python because I know it well, it has a large standard library for doing internet stuff, and it lets you write terse but readable code, which is good when you want to get a lot done in a short time. I chose Flickr because it contains a lot of stunning photos, Google don't provide automatable search APIs any more and I've had problems with the Yahoo image search in the past.
The script is below. It's what I wrote on the day in 90 minutes while experimenting with the Flickr API, so it could be tidied up, but I think it's still pretty readable, which is one of Python's big strengths in my opinion.
import sys
import os
from itertools import *
from urllib2 import urlopen
from flickr import photos_search

BatchSize = 10
fluff = set(["then", "there", "with", "have", "will"])


def search(title):
    words = set([word.lower() for word in title.split() if len(word) > 3])
    tags = ",".join(words - fluff)
    return photos_search(tags=tags, tag_mode="any",
                         sort="interestingness-desc", per_page=BatchSize)


titles = [line for line in [line.strip() for line in open(sys.argv[1]).readlines()]
          if line != ""]
results = [(title, search(title)) for title in titles]

os.system("rm -rf slides/")
os.makedirs("slides/chosen")

for (title, photos), slide_index in izip(results, count(1)):
    print title
    slide_dir = "slides/choose/%02i - %s" % (slide_index, title)
    os.makedirs(slide_dir)
    for photo, photo_index in izip(photos, count(1)):
        url = photo.getURL(urlType='source')
        print "  Loading ", url
        data = urlopen(url).read()
        local_file = slide_dir + "/%02i.%02i - %s.jpg" % (slide_index, photo_index, title)
        open(local_file, "wb").write(data)
The script creates two folders, slides/choose and slides/chosen. Under slides/choose it creates a folder per slide, named after the summary of that slide:
For each summary in the user's text file the script downloads ten photos from Flickr that have any tags in common with the words in the summary, ordered by "interestingness", whatever that means. The downloaded photos are saved into the appropriate folder under slides/choose:
The user then opens the slides/choose and slides/chosen folders in Nautilus, the GNOME file manager, and drags one picture per slide from the subfolders of slides/choose into slides/chosen:
To give a presentation, the user opens the slides/chosen folder in Nautilus and double-clicks on the first slide to open it in the GNOME image viewer. Hitting F11 in the image viewer shows the slides fullscreen. Hitting Space shows the next slide in the folder. The user can also navigate forwards and back with the Page-Up and Page-Down keys.
The final presentations are surprisingly good.
Ivan and I ran our Scrapheap Challenge workshop again at last week's SPA conference. This time we were hoping to get the participants to invent their own challenges in an Improv-style brainstorm at the start of the workshop, a section we called "Whose Line of Code is it Anyway?". Unfortunately this attempt was a bit of a flop, possibly because everyone had to get up early on a Sunday to get to Cambridge while the train services were cancelled and so nobody was in a very up-beat, brainstorming kind of mood, or more probably because neither Ivan nor I had any experience of Improv whatsoever.
Luckily we had some pre-canned challenges in reserve, so the workshop wasn't a total washout:
This time the challenges were more open-ended and the applications more interactive than in the workshop we ran at PoMoPro. Dynamic languages that played well with other software and webservices won out by a slight margin – Perl being the most successful – but Unix pipes-and-filters were not very useful.
Here's what the participants found helped their efforts:
Here's what the participants found hindered their efforts:
Normally we work through the challenges before running the workshop to make sure they are achievable in the time available. However, we hadn't done so this time because we were hoping that we wouldn't have to use them. On the plus side, that meant we were able to participate in the challenges during the workshop. Ivan has already written up our solution to the Real-Time Sloppographer. I will write up our solutions to the Presentation Package and Quiz Game in later articles.
Update: I have published our solution to the Presentation Package challenge.
Update: I have published our solution to the Quiz Game challenge.
Steve Freeman and I are giving a presentation at QCon this Friday. In the schedule we're listed as presenting "Mock Roles, Not Objects", which we've presented several times before. This time we are going to focus more on how to use TDD with Mock Objects to guide the design of object oriented software. We call this "Listening to the Tests". Other people talk about "Test Smells". Taking the mixed metaphors to heart we've retitled our talk "Synaesthesia: Listening to Test Smells".
The results were quite similar to our previous experience at OOPSLA in 2004: shell pipelines and dynamic languages that can easily run other programs won the day. Unlike last time, a few pairs had some success with Java. However no winning solution used Java and no all-Java solution was completed within the time frame.
As Ivan noted, the process that pairs followed had as much of an effect as the technology they used, if not more. Unlike last time, not all pairs actually did pair programming. Some split the solution into parts, worked independently on separate modules and then brought them together for integration. Pairs that did pair programming did noticeably better than those that split the work into individual tasks. From what I observed, I don't think that pair programming itself was the decisive factor. Pairs that split the work were hit by integration problems at the end of the challenge that were too great to solve by the deadline. Pairs that pair-programmed grew the software incrementally and always had a running program. The only pair that did well when working independently integrated every 15 minutes or so instead of once at the end.
A valid criticism of the workshop was that our challenges could easily be met by batch programs and involved a lot of text processing, so they were biased towards solutions using Unix-style pipes and filters. Next time we run the workshop we'll throw in some challenges to build interactive desktop applications so that Unix pipelines don't have such a strong advantage. If you're going to be at SPA 2007 on the Sunday, brush up on your GUI skills and show those Unix developers what real programming is all about. I hope some Smalltalk, LISP or J developers will turn up and demonstrate the obvious superiority of those development environments (instead of just talking about it).
Inspired by fairlygoodpractices.com, informativeworkspace.org and various conversations at XTC, I created a poster about Fairly Good Practices for XP Day. The poster showed some good practices from teams I've worked with or talked to.
I left space on the poster board for other delegates to pin their own ideas. Here's what had been pinned by the end of the conference:
Chris Cottee responded by creating his own ad-hoc poster about Staggeringly Bad Practices seen on real projects. Sadly his poster attracted more contributions than mine even though it started out as one hand-written index card. Here's what was on his board by the end of the conference:
There's no point in putting code into source control until we know it works in production.
Every time you have a "bad" day drop a practice and get everyone to reestimate everything until people begin rocking back and forth under tables.
We lost the source code and decompiled from production.
Thanks to everyone who contributed... whoever you are.
A lot of the literature about TDD talks about tests and about development, but what about that second "d"? What does "driven" really imply?
If you want to find out and are going to be at XP Day this year, the session Steve Freeman and I are running is the one for you! It's a hands-on, hands-on-keyboard, workshop-cum-tutorial for developers who have already tried TDD but want to learn more. You will need to bring a laptop and a USB memory stick will be handy too.
I won't say any more than that until after the event because I don't want to give anything away until the day.
I recently attended the Google London Test Automation Conference in Google's plush London offices. Much fun was had meeting people and sharing ideas over table football and free beer and there were plenty of presentations, all of which can be viewed online. There was even testing advice presented while you recycled the refreshments!
The presentations that stood out for me were Goranka Bjedov on Using Open Source Tools for Performance Testing, worth watching for the snappy one-liners alone; Robert Chatley and Tom White on LiFT, a framework for Literate Functional Testing; and James Lyndsay's lightning talk on Automation for Manual Testers, the takeaway lesson being that manual testing is about using your head, not your hands, and benefits from the judicious use of automation.
Steve Freeman and I also gave lightning talks about jMock, Steve presenting jMock 1, which had been mentioned a few times during the conference but not actually demonstrated, and myself presenting jMock 2, the first cut of which had been committed to CVS the night before the conference. The jMock 2 screenshots cannot be seen clearly in the online video, so here they are as stills:
A test using jMock 2, showing the Mockery, strongly typed mock objects, and literate expectations defined by references to real methods.
Autocompleting on mocked methods when writing expectations.
Creating a new method by applying the IDE's quick-fix to an expectation.
Refactoring a mocked method from within a test.
More examples of jMock 2 can be found in the acceptance test suite.
When we describe Scrapheap Challenge as a "workshop", we really do mean "work". But enjoyable work. It's for software development practitioners, not for the post-technical. This workshop involves programming, writing fun, non-trivial applications that integrate existing software components and then reflecting on what helped and/or hindered the task. Last time we ran it I learned a lot of useful design techniques and approaches that I've since applied successfully on my own projects. I expect to learn more this time round.
PoMoPro will be held in London on the Saturday 25th November, the weekend before this year's London XP Day conference, so you can catch both conferences in one long weekend. Places are limited at PoMoPro, so book now to avoid disappointment!
It's that time of year when a few foolish souls gather in a pub in London every Tuesday evening and start planning what will eventually become XP Day 2006. Yet again I've got myself roped into it. Will I never learn?
This year the Programme Chairs have chosen to have a stream for experience reports, both of successes and failures. Have you had difficulty implementing XP, Scrum or some other method? What did you find difficult and what did you do about it? If you have a story to tell, we want to hear about it.
We're also planning on having poster presentations during coffee and lunch. Anyone who is attending the XP Day conference can have a poster displayed to publicise an interesting idea or technique, ask an awkward question, solicit information or... well anything really.
The Call for Proposals has been announced and the submissions system has been streamlined so it's easier than ever to log onto the wiki and submit a session idea. Like last year, we're using an open peer review process: if you submit a session or poster you also vote on and review other submissions during the review period.
And finally the website has had a makeover. You can subscribe to a news feed in Atom format and download conference dates in iCalendar. It's all very Web 2.0.
Inspired by Dave Snowdon's Cynefin methods we used story-telling to facilitate the initial, brainstorming phase of the workshop. Steve and I kicked off by each telling a story of something that went wrong on a project that could have been avoided by a bit more forethought before development started. We then asked the participants, who were seated cafe-style in small groups, to share stories about things that worked well on a project because the right support had been put in place, or things that didn't go well but could have been avoided with some earlier preparation. While they told stories, each table recorded good and bad issues on green and red index cards for later analysis, discussion and presentation in poster form.
My story had a technical bent while Steve concentrated on the business side of things. I told of when I was called in to help with an emergency fix to a production system, only to find out that they didn't know what version of the system was running in production, couldn't run the system in a test environment without corrupting production data, didn't know that the system in the test environment had been corrupting production data, couldn't build the system correctly without laborious, error-prone manual patching, and didn't even know where all the code of the system was. The code wasn't all in the source repository because, they explained, "there's no point in checking in code until you know it runs in production". If an automated build and deployment process had been in place before work started on feature development, debugging would have been much easier or, I suspect, never necessary.
The story telling seemed to work well. Discussions started quickly; there was no awkward muttering as people worked out what they were meant to do. From what I could tell as I wandered between the tables, everyone stayed focused on the topic and each table produced a lot of cards. A lot of cards: I had to pick them all up after they had been spatially sorted on the floor. Next time I run the workshop I'll get the participants to do that job.
Sorting the cards
Drawing a poster
The posters on display
November in London is the time of frosty mornings, rainy evenings, fireside drinks in cosy pubs and XP Day. As ever, the tagline is "More than XP. More than one day." There are two days and three tracks of sessions on all sorts of topics, an open space where anybody can hold forth or create a poster, and two fascinating keynote speakers, Tim Lister and William Gaver. XP Day sells out early, so if you want to come you'll have to register soon.
This is my second (and last) year as co-programme chair. Last year the submission and review process was a lot of work — presenters submitted session proposals to the chairs by email and the whole programme committee read and reviewed them before the chairs collated the reviews and responded to every submission. This year I was determined to do something to reduce the workload.
Luckily I met up with the organisers of Benelux XP Day at SPA who suggested using a wiki for both submissions and reviews, and using a fully open peer-review process in which everyone who submitted sessions also had to review those submitted by others. This mostly worked very well. The only drawback was that presenters from non-technical backgrounds were put off by having to use wiki markup. I can see their point; I don't like it either. Next year we might try to use a WYSIWYG wiki engine or let submissions be uploaded in Word format.
I think the system worked especially well during the review phase because each review was short, didn't need any markup, and the wiki engine had explicit support for discussing articles. We gave every potential presenter five votes that they could assign to other sessions as they saw fit. We chose this scheme because we expected people to give only favourable reviews, since every review is signed by the reviewer and can be read by the reviewee. In practice this is what happened, but because people allocated their votes to different submissions we got a good idea of the relative popularity of the submissions and could concentrate on creating a balanced programme from the most popular.
Another advantage of the new submission and review process is that I've had time to prepare a session for this year. I will be running a workshop on what to do Before Iteration Zero with Steve Freeman. I hope to see you there.
The workshop explored the usability of software components. Secretly it was about reuse, but the term "reuse" has become a dirty word — in my experience software written to be reusable is usually useless and the effort of writing reusable components is mostly wasted.
There's lots of existing software out there and lots of it is being used by lots of different people who are quite able to build applications with it without starting enterprise reuse projects. Instead of "reuse", we think it's more productive to think about what makes a software component useful and how to use software components by actually writing software and reflecting on the experience. We hoped that would give us insight into how we can build systems from existing software and better write pieces of software that other people can use.
It certainly did. Participants were given three 90-minute challenges, in each of which they had to work in pairs to write a program to perform some non-trivial task. The tasks were too large to write from scratch in the time available, so pairs had to solve the challenges by integrating useful software that they could find on the internet. The three challenges were:
The participants were excellent. Different pairs managed to get a solution to all three challenges. The group came from different backgrounds and used a variety of technologies, so both successful and unsuccessful attempts generated a lot of interesting comparisons.
I expected the winning solutions to use a dynamic language to compose objects from well-packaged libraries and maybe use some REST-style web services. That's not exactly what happened.
Here's my interpretation of what seemed to work well:
Examples over Documentation: Attempts to use components were helped not by documentation but by the availability of example code that could be copy-and-pasted into the application and then tweaked to fit the situation. In some cases documentation was actually a hindrance. Misleading documentation found through Google led one team down a blind alley trying to use an inappropriate library that didn't actually do what was required. Another team tried to use a component that had huge amounts of well written documentation that was entirely useless and so slowed them down because they had to wade through a lot of text before finding out that they were wasting time. A team using components that had been developed test-first found the tests to be useful both as documentation and a good source of example code.
Source Code over Binary Components: The ability to look inside a component to understand what it did was a big help. No matter how well documented components were, participants turned to the source when it came to the crunch.
Loosely Structured Data over Highly Structured Data: Two out of the three winning solutions used Unix pipelines and Python's basic I/O and string-processing APIs to slice and dice data. Attempts to use highly structured data, such as XML, did not work well. In fact, one team explicitly removed markup from input data to make it easier to parse with simple string-processing functions. However, one pair failed to get far with Visual Basic because they had difficulties connecting two components that each used a different binary representation of text strings. The sweet spot seemed to be semi-structured text with no explicit markup and a common underlying encoding. I think there are two reasons for this. Firstly, if the structure required by consumers is not exactly the same as that provided by the producer, the structure just hinders extracting the required information. Secondly, when working iteratively it's a big help if you can look at the data you have to process and see its structure at a glance. Markup usually obfuscates the structure, so that you have to render the document before you can see it.
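A toy illustration of the semi-structured sweet spot (the track listing below is invented): with plain delimited text the structure is visible at a glance and a split() away, whereas an XML equivalent needs a parser before you can see anything at all.

```python
# One line per track: artist, album and title, separated by tabs.
data = """Miles Davis\tKind of Blue\tSo What
Miles Davis\tKind of Blue\tBlue in Green"""

# Each record is recovered with basic string operations; no parser,
# no schema, and the raw data is readable as-is.
tracks = [line.split("\t") for line in data.splitlines()]
artists = {artist for artist, album, title in tracks}
```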
Dynamic Typing over Static Typing: Pairs using a statically typed language got bogged down solving type-compatibility problems, getting the right versions of class libraries installed and so forth. Pairs using dynamically typed languages were able to experiment with their partially implemented programs and grow their solutions bit by bit as they learned about the problem and technologies.
Focused Components over Frameworks: It was easier to combine components if they were self contained, did one thing well and did not expect the application to be designed a specific way.
Compose Components over Modify Existing Applications: It was easier to compose focused components into a solution with a bit of scripting glue than it was to take an existing application that seemed to do most of what was required and then try to change it to fit. Modifying an existing application required understanding how the entire application worked, but writing some glue code that coordinated components was simpler because each component was simple and programmers didn't have to understand how all the components worked.
Rich Component Library over Programming Tools: Solutions using Unix pipelines beat solutions using Java or Visual Basic because Unix (or Cygwin) already comes with a huge set of existing components. It didn't matter that the Unix programmers had to use Vi and so didn't have all the assistance that Java IDEs provide.
Actual Capabilities over Intended Use: The successful solutions involved quite a bit of lateral thinking. Components were used for what they could do, not for what the author intended them to be used for. For example, one solution used a terminal-mode web browser not for interactive browsing but to strip HTML tags from a document to make it easier to extract required data from the text.
Simplify the Problem over Functional Areas: A successful approach was to use components to simplify the problem to the point that it could be addressed with a bit of custom code. A textbook approach to design would be to divide the program into modules that perform easily identifiable tasks. For example, a program to solve a Sudoku puzzle on a web page might be designed as four modules that individually download the HTML document, parse the puzzle from the HTML, solve a Sudoku puzzle and display the solution. The successful solution to that puzzle instead used the terminal-mode web browser to both download the HTML and render the HTML into raw text. The web browser component straddled two functional areas and made it easy to extract the puzzle from the web page with simple string manipulation functions.
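As a toy illustration of that last step (the rendered text below is invented), once the browser has reduced the page to plain text the puzzle can be pulled out with basic string functions:

```python
# Toy illustration: once a browser has rendered the page to plain text,
# the puzzle rows are just the lines made up of digits and '.' placeholders.
rendered = """Daily Sudoku - Monday
53..7....
6..195...
.98....6.
Play again tomorrow!"""

def extract_grid(text):
    # Keep only lines that consist entirely of digits and dots.
    return [line for line in text.splitlines()
            if line and all(c.isdigit() or c == '.' for c in line)]

grid = extract_grid(rendered)
```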
Obviously this is not an exhaustive list. It is also biased by the choice of problems, the mix of participants and the small size of the workshop. I'm sure that an experienced Java programmer who had a lot of class libraries installed on their machine could have solved the problems in the time given. We're going to run the workshop a few more times to better understand the results.
I've just got back from OOPSLA 2005. It was held in a soulless conference center in San Diego that was, strangely, shared with a couple of quite scary fundamentalist Christian groups who publicised their predictions about when Jesus was going to return and sit on the British throne in magazines left around the foyer. It was odd to be surrounded by two groups of people who clung zealously to their strange ideas in the face of incomprehension from the general public. But of course there's more to OOPSLA than Smalltalk and Common LISP.
Maybe it was the sessions I picked, but retrospection seemed to be a common theme at this year's conference. Linda Rising ran a retrospective looking back over the last 20 years of OOPSLA conferences. Grady Booch described his architecture archaeology: digging through past and present systems in a mammoth project to create an engineering handbook of software architecture, and working with the Computer History Museum to preserve the source code of historically interesting software. He likened his work to that of the Victorian gentleman scientists who collected and collated large numbers of specimens that allowed future generations of scientists to create and confirm their theories. Martin Fowler similarly described his career as digging through systems, finding the essence of what worked in practice and bringing those ideas to a wider audience. In the Onward! lightning talks Dave Thomas gave an impassioned criticism of our community for failing to learn and pass on fundamental ideas and techniques. Brian Foote spoke of the need for pattern paleontology, recording patterns that are now fossilised within our programming languages, and in a talk entitled "I Have Nothing to Declare But My Genius" he looked back over his experience with object-oriented languages, particularly Smalltalk, and why it taught him that static typing is not useful for object-oriented programming.
Even in the Scrapheap Challenge workshop winning solutions used a mixture of the old — Unix-style pipes and filters — and the new — Greasemonkey scripts and REST web calls. But more of that in another article.
Of course, there was a lot of new stuff presented as well. I missed Jonathan Edwards' presentation on Subtext, which I was told was the highlight of the conference. Instead I attended the dynamic languages symposium where Marcel Weiher gave an engaging presentation of Higher Order Messaging, which I've been experimenting with recently. His presentation had the amusing effect of needling a Common LISP programmer who proclaimed that "messaging is bunk" but that CLOS did it all years ago anyway. Higher Order Messaging was also discussed at RubyConf, or rather in the virtual chat that went on during RubyConf over IRC. Other new stuff included a demo of LINQ. Interesting stuff, but I'm not completely convinced by the end result. It seems a bit of a mishmash of object-oriented, procedural and functional programming and, unfortunately, the new syntax improvements are limited to queries and not opened up for general use by programmers.
That's all for now. I'll summarise the results of the Scrapheap Challenge workshop soon.
I've just returned from the SPA'05 conference where Steve Freeman and I gave a presentation on Embedded Domain Specific Languages in Java. The best aspect of the conference is the mind-stretching conversation that occurs over coffee, dinner, beer and scotch until late at night (or, more honestly, early in the morning). Here are a few of the thoughts, questions and weird facts that emerged over the four days:
* Intentional programming and intentional computing look like being important new ways of thinking about computers. But what does "intent" really mean?
* Could Stockholm Syndrome help us manage user expectations?
* Software should be turtles all the way down.
* Semiotics. How can we use it to think about software? And what is it anyway?
* How much syntax do we really need? Or... how little syntax can we get away with? We need a syntax liberation front. Rise up! You have nothing to lose but your tool chains.
* If a round-trip modelling tool can generate runnable code from a model and a model from runnable code, is it even a model any more? Isn't it just another representation of the code? So why call it a model?
* Pigs can lie.
* Creating vapourware is much harder than I expected.
* Why on earth didn't I learn to program in OCaml before?
The presentation did not go that well. We only had 20 minutes to talk and in that little time could not really show anything in action. Instead we gave a hand-wavy presentation with a made-up example application based on a tea shop. I think the example was too artificial and the talk too high-level to really show the techniques in action. Joe and I had the same problem with our example when we gave a mock objects demo at OT2004. The format of the presentation was uncomfortable for me too: I was stuck behind a fixed mike, but I much prefer to walk around the stage and gesticulate like a loon (that is, engage the audience).
The demo went much better. We fired up the IDE and ran a few TDD iterations using mock objects to explore the design of some objects in our tea shop example. I think showing the library in action when coding really helped get some of our ideas about the larger process across to the audience. Also, I had a clip-on mike and the audience was smaller, so I could adopt a more dynamic style and interact directly with the audience. It was great to meet some happy jMock users after the demo and chat with them about their experience with the library. However, the tea shop example was not perfect and served to confuse the message a little.
Duncan Pierce has had a lot of success using video games as example domains when teaching programming and design techniques. I always write a video game whenever I learn a new language or platform because a reasonably entertaining game exercises a lot of different features: event handling, graphics, distribution, file I/O, timing, etc. etc. Perhaps a video game would work much better as a demo for jMock. After all, it was a commercial video game project that drove me to write the forerunner of jMock and experience on that project, among others, informed the design of the current jMock API.