John C. Reynolds passed away recently. Lambda-the-Ultimate reflects on his scientific life.

Vijay told me of his passing. Sad I never saw him speak.

## Shoham, Leyton-Brown: Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations

Sat, May 16, 2009.

This book looks fantastic. Social choice, formal epistemics, game theory, so on and so forth. I really need a few spare lives right now.

What makes it even more fantastic is that Cambridge University Press has allowed them to distribute it as a free eBook too, making them surely the most progressive academic publisher.

OK, take a deep breath. Look at this page and tell me the world isn't crazy.

Say you want to talk to the world in Unicode, but you want to do it *quickly*. Well, obviously you're going to draft C's `atoi` and friends to convert numerals to your internal integer type, right? That's great in theory, but when your code is running on someone else's webserver that you know little about, things might get a little tricky.

Haskell's FFI specifies that the functions in the `CString` module are subject to the current *locale*, which renders them unpredictable on the aforementioned webserver. I can imagine a numeral encoding that, say, `strtol_l` understands under today's locale setting but fails to understand tomorrow. I don't think there are enough manpages in all the world to clarify this problem.

Solution? Use integers only for internal purposes, like user identifiers, render them in ASCII, and use Unicode strings for everything else. Don't use the `CString` module, carefully unpack UTF-8 `ByteString`s into Haskell `String`s, and don't expect warp speed. If you're (cough) putting this stuff in a library, hope like hell your users don't try anything too weird.
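The locale-free approach above can be sketched as follows. This is a minimal illustration, assuming the standard `bytestring` and `utf8-string` packages; `renderId` and `parseId` are hypothetical names, not anyone's published API.

```haskell
-- Stay entirely inside Haskell: render integers in plain ASCII via
-- show/reads and decode UTF-8 ourselves, never touching C's
-- locale-sensitive conversions (atoi, strtol_l, CString and friends).
import qualified Data.ByteString as B
import qualified Data.ByteString.UTF8 as U  -- from the utf8-string package

-- Render an internal integer (e.g. a user identifier) as an
-- ASCII-only ByteString.
renderId :: Int -> B.ByteString
renderId = U.fromString . show

-- Parse a numeral back. No locale is consulted, so the result is the
-- same on your machine and on someone else's webserver.
parseId :: B.ByteString -> Maybe Int
parseId bs = case reads (U.toString bs) of
               [(n, "")] -> Just n
               _         -> Nothing

main :: IO ()
main = do
  print (parseId (renderId 42))               -- Just 42
  print (parseId (U.fromString "not a num"))  -- Nothing
```

Not warp speed, but the behaviour doesn't shift underneath you when the environment's locale does.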

One day someone will resolve all the issues of implementing a proper Unicode I/O layer, and I will thank them for it.

Any idiot knows a tree has a branching structure, or at least that's what we've been telling the first years since time immemorial. There was a proper CS tree at UNSW back before I got there that the internet doesn't appear to remember so well. This photo of the "right way up" fig is from Geoff Whale's data structures textbook, and carries the following attribution:

Evidence that trees sometimes really are as shown in data structure textbooks. Sculpture using the medium of a dead fig tree at UNSW. Photo by Russell Bastock.

I reproduce it here without permission. That must be the Applied Science building in the background and my guess is that it sat at the eastern end of where the Red Centre is now.

I spotted this purported "tree" on Centennial Avenue, near Avoca St (which runs between Queen and Centennial parks) several months ago but have only now got around to photographing it. I grant that it appears to be a DAG, and is therefore for most purposes a tree.

...and my only excuse is that so many others have made the same mistake. To atone I present here law 11, so familiar to those who deal with modern universities:

The bigger the system, the narrower and more specialized the interface with individuals.

The most recent edition has been renamed The Systems Bible. I caved in and ordered a copy from Amazon. True to form, it won't be here until sometime in March.

I've been trying to acquire a copy since reading an excerpt in the knowledge book with some intriguing assertions about general systems. Alibris has screwed me around twice but now someone has put it on the 'net.

Most of the workshop was quite technical, as I expected. Andrew Tridgell gave a good talk on his appreciation of the GPL, and Eben Moglen was quite amusing while arguing multifarious legal details.

The CSIRO finally got a decision on the legitimacy of their wireless patent in the US, and perhaps it will solve their perennial funding crisis. As this is about hardware protocols I'm not altogether sure it's a terrible thing, being close to the original intended use of patents and all that. Anyway, I'm all for another Microsoft-esque tax on corporate America. Let's just hope they use it for more than beer and skittles.

Courtesy of Manuel.

After finishing this book it struck me that it's hopelessly Americentric; such giants of the field as Thierry Coquand and Gérard Huet, the entire Coq project, the notion of a "logical framework" (thank you Gordon Plotkin and friends), and sundry other influential things fail to rate a mention. (OK, Per Martin-Löf gets a guernsey as the godfather of neo-constructivism, though the entire Swedish-west-coast movement he inspired goes unremarked.) The problem of having to manually provide variable instantiations (due to the undecidability of higher-order unification) is alluded to (p270), but where is the follow-up pointer to the pragmatic resolution used in e.g. Isabelle, with linear patterns, stylised rules and all that?

The continual reference to the issue of proof and to mathematical practice as normative or indicative is misleading; the issue is how to engineer computer systems as rigorously as we do other artefacts, such as bridges and planes, and most of the epistemological issues involved are not unique to computer science. Similarly we need not think of these proof assistants as oracles so much as mechanisations of other people's expertise, in much the same way that structured programming and libraries (or even design patterns if you like that sort of thing) guide us towards best-practice, and employing algebraic structures leads to all sorts of nice things, like compositionality.

Putting it another way, why would one ever think that a computer system can be engineered to be more reliable than a motor car?

Still, the book does have some interesting discussions on just what a proof is. Jon Barwise's conception of formal proof as an impoverished subclass of mathematical proof rings true to me, though the particular example (p324):

... [C]onsider proofs where one establishes one of several cases and then observes that the others follow by symmetry considerations. This is a perfectly valid (and ubiquitous) form of mathematical reasoning, but I know of no system of formal deduction that admits such a general rule.

is a bit weak: in Isabelle, one could prove a case, prove that the others are in fact symmetric variants of it, then draw the conclusion. If the symmetry proof were abstract enough it could be enshrined in a library.
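To make that rejoinder concrete, here is a toy sketch of the pattern in Lean (any proof assistant with lemmas would do); `Q` and `symmQ` are hypothetical stand-ins for the property being proved by cases and the library's symmetry lemma:

```lean
-- Hypothetical setup: Q is some binary property, and symmQ is a
-- once-and-for-all "symmetry considerations" lemma.
variable (Q : Nat → Nat → Prop)
variable (symmQ : ∀ x y, Q x y → Q y x)

-- Prove one case (h), then obtain the symmetric case by appealing to
-- the symmetry lemma instead of repeating the argument.
example (h : Q 1 2) : Q 1 2 ∧ Q 2 1 :=
  ⟨h, symmQ 1 2 h⟩
```

The "general rule" Barwise wants is then just an ordinary lemma, applied once per symmetric case.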

I stole this from Tim's shelf while he wasn't watching. I find it really patchy, perhaps because the main reason I picked it up was his potted history of proof assistants (Chapter 8). Here's some notes:

### Chapter 6: Social Processes and Category Mistakes

I'm quite partial to James H. Fetzer's position that claiming computer programs can be verified is a category error. (I remarked to Kai a few weeks ago, self-evidently I thought, that proof at best assures us that our reasoning about an artefact is sound, and that the disconnect between what we talk about and the actuality of putting the artefact in the environment can only be bridged by testing, an issue our brethren in the empirical sciences have been ruminating about for centuries.) Note that I am firmly of the opinion that formal proof of just about anything is good if one can get it, the effort of comprehending what's being said aside. This is about the epistemology of computer science, of examining the so-rarely articulated general issues of moving from theory to practice.

My impression is that his position is dismissed (see, for example, this terse rebuttal by the usually erudite RB Jones) as being a category error itself; software-as-mathematical-artefact, hardware-as-physical-process, read-the-proof-to-see-what's-proven. In context here, though, Fetzer's claims are set against the blue-eyed optimism of those who want to make the correctness of their system contingent only on the laws of physics (p236):

"I'm a Pythagorean to the core," says Boyer, "Pythagoras taught that mathematics was the secret to the universe." Boyer's dream — not a hope for his lifetime, he admitted — is that mathematical modeling of the laws of nature and of the features of technology will permit the scope of deductive reasoning to be far greater than it currently is. "If you have a nuclear power plant, you want to prove that it's not going to melt down ... one fantasizes, one dreams that one could come up with a mathematical characterization ... and prove a giant theorem that says 'no melt-down.' In my opinion, it is the historical destiny of the human race to achieve this mathematical structure ... This is a religious view of mine."

Unusually for me I'll say no more until I've read Fetzer's paper.

### Chapter 8: Logics, Machines and Trust

- MacKenzie doesn't say anything much about the logic employed in PVS; this is another point of departure from the Boyer-Moore theorem prover (beyond the approach taken to automation). My understanding, from Kai, is that it uses a version of higher-order logic.
- Cute: there's a "mathematically natural" example of a statement that is true but does not follow from the Peano axioms (cf Gödel incompleteness). Check out the Paris-Harrington theorem.

More as I get around to it.

How cool is this: someone's organised a family tree of Linear Temporal Logic (LTL) translations. Who would have thought there were so damn many.

Tim told me about this talkfest. I've only watched half of Amir Pnueli's presentation so far, and it seems to be mostly a recapitulation of old, old stuff (cf Kai's presentation in his undergraduate concurrency class).

Anyway, the big problem is the downloadable videos don't include the slides, so it's pretty tedious trying to follow what's going on. I'm told the streaming versions don't suffer from this.

This book has an excellent introduction to complexity, especially
*space* complexity. Thanks mrak.

The copy in the UNSW Library has already been nicked, it's that good.

Professor Comer at Purdue shares his searing insights into the social dimensions of computer science.

Gerwin has been busy again.