It is a nice, easy-going undergrad read exploring rubber and its properties from a thermodynamic point of view. The authors apply the theory to rubber balloons to familiarize the reader with concepts such as stability, bifurcation, hysteresis, phase transitions and probably a couple of others I missed. Huh, you wouldn’t expect all of these to be observable in a bunch of simple party balloons, would you?

Once again, the book is very accessible in terms of the math machinery used, making it perfect as supplementary reading for students interested in thermodynamics and taking a course in general physics. The equipment required to reproduce the experiments also doesn’t seem to be very sophisticated, allowing for some neat demonstrations.

For the details I refer you to the book. Apart from rubber balloons, which are no doubt fun, rubber itself is a very interesting material. It was its remarkable properties that finally made me find some time and read a book about rubber.

The thing with rubber is that the elastic forces you experience are entropic: when you stretch a rubber band you (roughly speaking) do not increase its internal energy, you decrease its entropy. That’s because rubber molecules are long twisted chains, and when you stretch rubber you straighten them, thus ordering them (decreasing their entropy). A simple kinetic theory of rubber based on entropic reasoning is presented in the book. For a quick introduction to rubber thermodynamics I suggest John Baez’s post about entropic forces.
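To make the entropic reasoning concrete, here is the standard freely jointed chain result (textbook statistical mechanics, not a formula from the book itself). For a chain of N links of length b held at end-to-end distance r, the entropy is Gaussian and the tension follows from the entropy alone:

```latex
S(r) = \mathrm{const} - \frac{3 k r^2}{2 N b^2},
\qquad
f = -T \frac{\partial S}{\partial r} = \frac{3 k T r}{N b^2}.
```

The force is proportional to the temperature T, which is the signature of an entropic spring: a band held stretched at fixed length pulls harder when heated.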

However, the explanation based on entropy (or the number of available states) may seem unsatisfactory to you, because it doesn’t directly provide a visual atomistic/mechanical explanation of elastic rubber forces. Meanwhile the picture is quite simple. Imagine a long, chaotically wobbling chain. Obviously, it cannot be straight if it wobbles, and the more it wobbles the more twisted and shorter it becomes. The chain is a rubber molecule, and the wobbling is thermal motion. From this picture it is also clear why rubber shrinks when heated.
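That picture is easy to simulate. Below is a toy Monte Carlo sketch of my own (not the Step model from the animation): a 2-D chain whose successive link directions jitter by a random angle, with the jitter strength standing in for temperature.

```python
import math
import random

def end_to_end(n_links=200, wobble=0.1, trials=200, seed=1):
    """Mean end-to-end distance of a 2-D chain of unit links whose
    direction performs a random walk with angular spread `wobble`
    (a crude stand-in for temperature)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        angle = x = y = 0.0
        for _ in range(n_links):
            angle += rng.gauss(0.0, wobble)  # thermal wobbling of a link
            x += math.cos(angle)
            y += math.sin(angle)
        total += math.hypot(x, y)
    return total / trials

cold = end_to_end(wobble=0.1)  # barely wobbling: almost straight
hot = end_to_end(wobble=0.8)   # strongly wobbling: coiled and short
print(cold, hot)               # the hot chain comes out markedly shorter
```

The contour length is the same in both runs; only the wobbling differs, which is exactly why rubber shrinks when heated.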

To assist the explanation I have prepared an animation featuring a model of a single rubber molecule; it is available as .mp4 and .gif (the gif may have problems with fps). The modeling was done in Step.

]]>

inf     1
====   /     n    n
\      [    x  log (x)
 >     I    ---------- dx
/      ]    gamma(n x)
====   /
n = 0   0

That’s the output of Maxima. Some systems went further and don’t restrict themselves to plain ASCII. Axiom can produce output as nice as this:

            2
      x - %A   ┌──┐
    %e        \│%A
┌┐
│   ──────────────── d%A
└┘     tan(%A) + 2

Even now, most CASes retain a command-line interface; here, for example, is a Mathematica 8 terminal session:

In[14]:= Pi*(a+b^2/(Exp[12]+3/2))

                2
               b
Out[14]= (a + ------) Pi
              3    12
              - + E
              2

The same technique was also widespread on Usenet; see whim.org for a collection of notes, some of which appeared on `sci.math`. I was even able to find “Guidelines for Using ASCII to Write Mathematics”.

If you are looking for a standalone application to render LaTeX to ASCII, the only one I’ve found is `tex2mail`, a more than decade-old Perl script. Much water has flowed under the bridge since then, and the old character encodings have been replaced by Unicode, namely UTF-8. Unicode provides lots of math-related symbols, and naturally one wants to employ this abundance for prettier output of math formulas in the terminal.

I’ve tried to enhance tex2mail by turning it into tex2unicode. To some extent I’ve succeeded:

   ┌──────┐                      ┌──────────────────┐
  3│    3              ┌─┐    4  │     2    6      4
 ⌠\│ a x              3│    3 x \│1 - x  + x  - 3 x
 ⎮ ────────── dx  =  \│a  ──────────────────────────────
 ⌡  ┌──────┐              ⎛    2      ⎞ ┌──────┐
    │     2               ⎜ 3 x  - 12 ⎟ │     2     2
   \│1 - x                ⎝           ⎠\│1 - x  - 9x  + 12

           ⎡     1 ⎤n
   lim     ⎢ 1 + ─ ⎥  = e
 n --> oo  ⎣     n ⎦

                        n        n
 ⌠1  x     ──┐oo  ⌠1   x  (log x)
 ⎮  x  dx = >     ⎮   ──────────── dx.
 ⌡0        ──┘n=0 ⌡0       n!

 ┬─┬oo ⎛     1  ⎞     ⎛ ┬─┬oo    1    ⎞-1           1               1      6
 │ │   ⎜ 1 - ── ⎟  =  ⎜ │ │   ─────── ⎟   = ────────────────── =  ──── =  ── ≈ 61%
 ┴ ┴p  ⎜      2 ⎟     ⎜ ┴ ┴p      -2  ⎟       1    1              ζ(2)     2
       ⎝     p  ⎠     ⎝       1 - p   ⎠     1 + ── + ── + ∙∙∙             π
                                                 2    2
                                                2    3

Unfortunately, things turned out to be more complicated with other LaTeX commands. The problem is that some symbols like ‘⟶’ *should* be wider than others, which undermines the whole concept of monospace formatted output. To make things worse, the result depends both on the font and on the software: Gedit and GNOME Terminal treat the same string differently. Actually, the only reliable symbols to enrich ASCII art with are those from the box-drawing Unicode block.

To make a long story short: since I don’t really need a full-featured LaTeX-to-monospace-Unicode renderer, I’ll just post what I’ve done so far:

If you are not content with how certain LaTeX commands are handled, for example ‘\otimes’ or ‘\to’, it is *really* easy to improve them. So don’t hesitate to adjust it for your needs. The license is whatever it was for the tex2mail.in file in the PARI/GP project, where I took it from. I guess it is GPLv2.
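For a feel of what such an improvement amounts to, here is a toy sketch of the substitution step (the command table below is illustrative; it is not tex2mail’s actual table or code, which is Perl):

```python
# Map LaTeX commands to Unicode code points and substitute them in a string.
SYMBOLS = {
    r"\alpha": "α", r"\beta": "β", r"\pi": "π", r"\infty": "∞",
    r"\otimes": "⊗", r"\to": "⟶", r"\le": "≤", r"\zeta": "ζ",
}

def texify(s):
    # Replace longer commands first, so that a short command that is a
    # prefix of a longer one cannot clobber it.
    for cmd in sorted(SYMBOLS, key=len, reverse=True):
        s = s.replace(cmd, SYMBOLS[cmd])
    return s

print(texify(r"\alpha \otimes \beta \to \pi"))  # α ⊗ β ⟶ π
```

Note the ‘⟶’ produced for ‘\to’: this is precisely one of the symbols that breaks the monospace grid, as discussed above.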

]]>

SklogWiki is an open-edit encyclopedia dedicated to thermodynamics and statistical mechanics, especially that of simple liquids, complex fluids, and soft condensed matter.

Sounds promising! At least to me: I’m a fan of these areas, and thus I couldn’t pass this wiki up several years ago. Since then I’ve been using it quite often and have learned about several things from it, including, for example, non-extensive thermodynamics. Another curious thing that would be interesting for any physicist is concealed in SklogWiki’s logo:

What does this mean? It is the shorthand used by James Clerk Maxwell for the word **th**ermo**d**ynami**cs**. Pretty neat, isn’t it? You can read a bit more about SklogWiki’s logo in the about section.

So, if you are interested in thermodynamics or statistical physics (mostly classical), then I suggest you roam about this wiki. If you are capable of contributing to it, then **be bold!**

Oh, I can’t help posting a link to one of SklogWiki’s pages: the phase diagram in the ρ-T plane. Most people are familiar only with p-T phase diagrams, but studying the ρ-T version can enhance your understanding of phase transitions and coexistence.

]]>

**Why does a tiger have stripes and a lion doesn’t?**

One might expect that the explanation is written within the fundamental building blocks which these animals are made up from, so one could take a big knife and open the lion’s and the tiger’s bellies. One finds intestines, but these are the same for both animals. So maybe the answer is hidden in even smaller constituents. With a tiny knife we keep cutting and identify a smaller kind of building block, namely the cell. Again, there is no obvious difference between tigers and lions at this level. So we need to go even smaller. After a century of advancing `small knife technology’ we discover DNA and this constituent truly reveals the difference. So yes, now we know why tigers have stripes and lions don’t! Do we really? No, of course not. Following in the footsteps of Charles Darwin, your favorite nature channel would tell you that the explanation is given by a process of type

which represents the successful challenge of a predator, operating within some environment, on some prey. Key to the success of such a challenge is the predator’s camouflage. Sandy savanna is the lion’s habitat while forests constitute the tiger’s habitat, so their respective coat blends them within their natural habitat. Any (neo-)Darwinist biologist will tell you that the fact that this is encoded in the animal’s DNA is not a cause, but rather a consequence, via the process of natural selection.

This example illustrates how monoidal categories enable us to shift the focus from an *atomistic* or *reductionist* attitude to one where systems are studied in terms of their interactions with other systems, rather than in terms of their constituents. Clearly, in recent history, physics has solely focused on chopping down things into smaller things. Focussing on interactions might provide us with a complementary understanding of the fundamental theories of nature.

In my opinion the reasoning above is brilliant. Note, however, that it by no means implies the reductionist approach should be abandoned. The fact that objects can be dissected into their constituents is as true as the fact that the whole is more than the sum of its parts. The cited passage just demonstrates that many people (like me) are really not in favor of the *dominating* role of reductionism in our worldview.

]]>

Why is this chapter the one to read? Some say that category theory is the language to describe (complex) systems, particularly in physics. Although the “Rosetta Stone” provides some clues for such confidence, it is mainly bound to the realm of quantum mechanics. Meanwhile, “Categories for the practising physicist” seems to go further and takes physics in general into consideration. Moreover, it provides references to other interesting stuff, like the usage of category theory in biology.

A nice feature of category theory as a language for physics is that it can easily be taught to a computer. And I bet Haskell is a perfect tool for that. Here one may recall that in Haskell category theory is already used for modeling various systems. I’m talking about functional reactive programming, which is employed, roughly, to describe systems exchanging signals. This includes user interfaces, games, robotics, etc. It is usually cast with the help of Arrows or Freyd categories (whatever that means). Remember, though, that the approach of FRP differs from the interpretation of physics given in “Categories for the practising physicist”.

I’m ending the post with something completely unrelated — a photo of the Apollo 10 crew walking to a launch complex:

]]>

So consider a permutation

You could find its number of inversions to be 8. Arguably, the common method for such a calculation is more suitable for a computer than for a human. It is so unnatural to me that I won’t even formulate it. However, the following picture should be self-explanatory:

The number of inversions is just the number of intersections of the curves. However, there are some rules to be followed while drawing such diagrams. But again, I won’t bother formulating them. I simply state that the following picture is *wrong*:

No doubt you got the idea in less than a second. Obviously, the common algorithm is well suited for computers, and the calculation of inversions is really a job for a computer. But for teaching humans it is not so good.
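For completeness, the computer’s version of the job is a few lines (the permutation below is my own example, not the one from the pictures above, which I haven’t reproduced here):

```python
def inversions(p):
    """Count pairs (i, j) with i < j and p[i] > p[j]; each such pair is
    exactly one crossing in the two-row diagram described above."""
    return sum(1
               for i in range(len(p))
               for j in range(i + 1, len(p))
               if p[i] > p[j])

print(inversions([5, 2, 4, 3, 1]))  # 8
print(inversions([1, 2, 3]))        # 0, the identity has no crossings
```

The quadratic pair-counting here is the naive method; a merge-sort-based count runs in O(n log n), but for teaching the diagram wins anyway.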

]]>

As usual, if you find some bugs or want to request a new feature or just have questions, I’ll be glad to help.

]]>

Moreover, I bet you know that the maximum efficiency is achieved by a Carnot heat engine, whose efficiency is:

You should also be aware that the Carnot heat engine requires the processes involved to be reversible, which in turn means that the engine proceeds infinitely slowly. But if you are like me, chances are you haven’t made the final step and concluded that such an engine is absolutely useless! Indeed, if it operates infinitely slowly, then it has zero power. Since heat engines are built to do actual work, zero power is not what we are looking for.

Thus we need a simple model which takes into account the finite time required by an engine to perform a cycle. Fortunately, there is such a model, due to Curzon and Ahlborn. For an introduction to finite-time thermodynamics I refer you to the book *“Understanding non-equilibrium thermodynamics: foundations, applications, frontiers”* by Georgy Lebon, David Jou and José Casas-Vázquez. The rest of the post follows it.

In essence, the Curzon and Ahlborn heat engine operates in a Carnot cycle, with two isotherms and two adiabats. However, the heat flux between the various systems (cooler, working substance, heater) is governed by Newton’s law:

It is also assumed that these heat transfers are the only source of irreversibility in the system and that their combined duration almost equals the length of the whole cycle. Leaving aside the calculations, we get for the power:

the notation used is explained in this figure:

To find the maximum possible power given fixed and , one has to solve the following system:

If you are patient enough, you’ll ultimately get the efficiency, corresponding to :

Surprisingly, the result does not depend on or . As an illustration of the differences between Carnot’s result (1) and the result of Curzon and Ahlborn (2), I’ll quote *“Understanding non-equilibrium thermodynamics”*:
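For reference, in standard notation (with T_c and T_h the cold- and hot-reservoir temperatures; these are the textbook forms of the two results, restated here rather than copied from the formulas above):

```latex
\eta_{\text{Carnot}} = 1 - \frac{T_c}{T_h}
\quad\text{(1)},
\qquad\qquad
\eta_{\text{CA}} = 1 - \sqrt{\frac{T_c}{T_h}}
\quad\text{(2)}.
```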

As an illustration, consider a power station working, for instance, between heat reservoirs at 565 and 25 °C and having an efficiency of 36%. We want to evaluate this power station from the thermodynamic point of view. It follows from (1) that the Carnot’s maximum efficiency is 64.1%; our first opinion on the quality of the power station would be rather negative if Carnot’s efficiency is our standard for evaluation. However, according to (2), its efficiency at maximum power is 40%. Thus, if the objective of the power station is to work at maximum power output, it is seen that the efficiency is not bad compared to the 40% efficiency corresponding to this situation. We can therefore conclude that, to have realistic standards for evaluation of an actual heat engine, one needs to go beyond Carnot’s efficiency and to incorporate finite-time considerations.
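The quoted numbers are easy to check (a quick sketch; temperatures are converted as T[K] = T[°C] + 273.15, so the last digit may differ slightly from the book’s 64.1%):

```python
t_cold, t_hot = 25 + 273.15, 565 + 273.15  # reservoir temperatures, K

eta_carnot = 1 - t_cold / t_hot        # Carnot limit, equation (1)
eta_ca = 1 - (t_cold / t_hot) ** 0.5   # efficiency at maximum power, (2)

print(f"Carnot:         {eta_carnot:.1%}")  # about 64%
print(f"Curzon-Ahlborn: {eta_ca:.1%}")      # about 40%
```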

If you’ve got a taste for finite-time thermodynamics, I suggest you read the book. Keep in mind that power maximization might not be the only goal; for example, one would probably also like to minimize the pollution or the running cost of the plant. These topics are also discussed in the book.

I’ll conclude with a thought (or a question) of my own. Power, unlike efficiency, is obviously an additive quantity. Hence one could combine several (say, *n*) low-power but efficient heat engines into a single engine. Roughly, the efficiency of such an engine would remain high, but its power would increase *n* times. So I wonder: is that the reason why, to produce a powerful car engine, one adds cylinders instead of increasing the power output of a single cylinder?

]]>

Usually I find it quite pointless to publish a post containing just a link. But not this time. The link I’m presenting (ta-da!) is http://johncarlosbaez.wordpress.com/ . I was combing Google blog search for the GENERIC formalism when I came across the aforementioned blog.

To make things clear: the physics of complex systems is the only thing I care about. All that stuff about Haskell, symbolic computations or math in general is nothing more than my way and my tools toward a theory that would describe the world around me. Indeed, a geophysics textbook is one of my favorite books. And of course I’m concerned with what is happening to my world, my planet.

That’s why John Baez’s blog caught my attention. The author is a rather famous physicist who has relevant knowledge in the field of complex systems and who may share views on the environment similar to mine [1]. Thus I’m going to spend quite some time reading his blog and works.

[1] There is even a good documentary film by Ed Watkins, “Into The Cool” (password: Montana). Amazingly, as far as I understand, this documentary was part of Ed Watkins’ MFA. If you want to introduce your friends who are not very good at physics to the thermodynamical view of our environment, just show them the film.

]]>

Through this newsgroup I learned about lots of interesting stuff I wasn’t even aware of, among it Rubi (Rule-Based Integrator) and the poor man’s integrator. I really like the picture on the latter site. I’ve also found some good references to articles and books, for example a good introduction to expression simplification.

Google Groups’ web interface to Usenet doesn’t filter spam. To cope with this I use the Pan newsreader, with http://www.eternal-september.org/ as the server. Here is how Pan looks:

Unfortunately, I couldn’t find a server that provides a 20-year archive of the newsgroup; 5 years was the most. Besides Google Groups, sci.math.symbolic can also be read at mathforum.org.

To sum up: if you want to write your own CAS, sci.math.symbolic is the place to start.

]]>