I had been having trouble upgrading to the latest IPython (0.9.1) through MacPorts. In fact, at this point I can’t even access the download site, so I’m not sure what’s up. At any rate, the upgrade failed only because the file ipython-0.9.1.tar.gz couldn’t be downloaded (well, I guess the source file is somewhat important!). So to get it to work I had to brute force it: I manually downloaded the file (I googled it) and put it in /opt/local/var/macports/distfiles/python.

Then I ran:

sudo port upgrade py25-ipython

and it went just fine.

At this point just a small update to say that since the activation fiasco, I have not used Matlab at all. Everything that I have done for the last four presentations has been done with R and Python, and I am both the happier and wiser for it.

I have been playing with JMP a bit, but honestly, it’s a bit too “high level” and while it’s neat for data exploration, it really made me nuts that two weeks after I got a license, they were *offering* me a special deal on the impending upgrade. Nothing like dropping a bundle on instantly outdated software. Way to go guys.

So for me, it’s been a pleasure to use R, Python (with IPython of course), and Ferret on my Mac Pro. If I ever take a break from playing on the computer, I’ll post up what I installed on the new Mac in terms of scientific software.

Well, I broke down and jailbroke my phone last night. Partially it was just to try it, but also because I was getting sick of the subpar 3rd party apps that were inundating the App Store. After following the instructions via Lifehacker to install Cydia, I was able to install OpenSSH as well as other cool things, like Python.

Then I saw that you could install iPython on the iPhone so I thought, let’s try it.

So how hard was it? With the Python package installed, it was

easy_install ipython

Seriously.

To lay it out in terms of steps…

1. Install Cydia (The only caveat here is that I got a different SHASUM when I checked the pwnage tool downloaded from the macgeekblog site, so I redownloaded it from the pwnage mirrors)
2. Follow the instructions to get openSSH up and running.
3. Go into Cydia and under “sections” go to “scripting”. There they have Python (among others).
4. I also installed a terminal
5. Now you can either go in through the terminal on the iPhone(touch) or SSH in from a different computer. Either way, su to root and then you can
6. easy_install ipython

Next of course would be to install Numpy and do folding at home (I’m kidding!), but this just shows some serious possibilities.

Did I also mention that I installed the NES frontend which can use all the public domain ROMs that are out there? Someone mentioned ROM world and The Old Computer but I haven’t checked them out yet.

Cool stuff.


OK, another night, another trial. I must say, tonight was a lot more fun than the last couple of nights, because I really felt that I learned something, which is really the whole point of this exercise. So the example I was trying to code tonight is a simple EOF (empirical orthogonal function) analysis of a 3D data series. This is something that I just had to code up at work today, so it was a perfect chance for me to try out Sage. For work I ended up altering an existing m-file and running the EOFs in Matlab, but that’s OK, because now I know what I expect to see after running this in Sage.
The data names have been changed to protect the innocent.

# Load in required modules
sage: from scipy.io.netcdf import *
sage: from pylab import *
sage: from scipy.stats.stats import nanmean
sage: import datetime

#Load data from NetCDF file
sage: ncfile = netcdf_file('file.nc','r')
sage: varnames = ncfile.variables.keys()
sage: varnames

['LONGITUDE', 'TIME', 'LATITUDE', 'DATA']

#Now that I have the order I can load into arrays
sage: lon = ncfile.variables[varnames[0]][:]
sage: lat = ncfile.variables[varnames[2]][:]
sage: dates = ncfile.variables[varnames[1]][:]
sage: raw = ncfile.variables[varnames[3]][:,0:50,:] #I only want 50 records in Y
sage: data = raw.copy() #make a copy
sage: data.shape
(124, 50, 151)
sage: (ncycles, ny, nx) = data.shape

#deal with dates
sage: ncfile.variables[varnames[1]].attributes

{'axis': 'TIME',
'time_origin': '15-JAN-1901 00:00:00',
'units': 'HOURS since 1901-01-15 00:00:00'}

sage: off = datetime.datetime(1901,1,15,0,0,0)
sage: months = ones(ncycles)

sage: for i in range(0,ncycles):
....:     tdel = datetime.timedelta(days=dates[i]/24)
....:     td = off + tdel
....:     months[i] = td.month
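As an aside, the days=dates[i]/24 conversion can be skipped entirely, since timedelta takes hours directly. A minimal sketch with made-up hour offsets (the real values would come from the NetCDF file):

```python
import datetime
import numpy as np

# hours since the time origin, per the file's 'units' attribute (values made up)
off = datetime.datetime(1901, 1, 15, 0, 0, 0)
dates = np.array([0.0, 744.0, 1416.0])

# timedelta accepts hours directly, so no manual /24 conversion is needed
months = np.array([(off + datetime.timedelta(hours=float(h))).month
                   for h in dates])
```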

sage: ind = where(raw<0)
sage: data[ind] = nan

And here was the first real bottleneck, as things just slowed to a crawl as python tried to find all the instances where the data was less than zero. This is something that is instantaneous in Matlab, and took over 30 seconds to go through 124*50*151 values. There must be a faster way to do this.
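For what it’s worth, in NumPy a boolean mask assignment skips building the index arrays that where() returns, and is usually the fast path. A minimal sketch on a tiny made-up array (the real data would be the 124x50x151 block above):

```python
import numpy as np

data = np.array([[1.0, -2.0],
                 [-0.5, 3.0]])

# assign through a boolean mask instead of where() index arrays;
# this avoids materializing the index tuples entirely
data[data < 0] = np.nan
```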

sage: data2 = data.copy()

#Take out monthly averages
sage: mclim = ones((50,151))
sage: for i in range(1,13):
....:     index = where(months==i)[0]
....:     mclim = nanmean(data[index,:,:])
....:     data2[index,:,:] = data[index,:,:] - mclim

sage: data2.shape = (ncycles, nx*ny)
sage: ltmean = nanmean(data2) #get mean of each time series

#take out long term mean
sage: anom = data2.copy()
sage: for i in range(0,ncycles):
....:     anom[i,:] = data2[i,:] - ltmean

sage: EOF = nan_to_num(anom) #push land back to zero
sage: [u,s,v] = linalg.svd(EOF)
sage: s2 = zeros((ncycles,ncycles)) #initialize before filling the diagonal
sage: for i in range(0,ncycles): #build array so that we can project eigenvalues back onto timeseries
....:     s2[i,i] = s[i]
sage: amp = dot(s2.transpose(),u.transpose()) #get amplitude
sage: spatial = v[0:4,:]# pull out spatial fields
sage: ratios = pow(s,2)/sum(pow(s,2))*100 #get %variance explained for each mode
sage: temp = spatial[0,:]
sage: temp.shape = (ny,nx) #push back to original dims
sage: plot(amp)
sage: savefig('amplitude.png')
sage: imshow(flipud(temp))
sage: savefig('spatial.png')
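The transcript above can be boiled down quite a bit: in plain NumPy the s2 loop disappears, since multiplying u by s scales each mode by its singular value. A compact sketch of the same EOF steps on a small random array (shapes and names here are mine, not from the session above):

```python
import numpy as np

ncycles, ny, nx = 12, 5, 7
anom = np.random.randn(ncycles, ny * nx)  # stand-in for the demeaned anomalies

# economy-size SVD avoids the huge full v matrix
u, s, v = np.linalg.svd(anom, full_matrices=False)
amp = u * s                               # amplitude time series, one column per mode
spatial = v[0:4, :]                       # leading spatial modes
ratios = s**2 / np.sum(s**2) * 100        # % variance explained per mode
```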

Success!

I actually really felt positive about this whole example as I really learned a lot more. This also was probably too large of an array to test out (measure twice cut once!) but it’s what I was working with so I wanted a real world example. The more that I worked in sage the more comfortable I felt as well. The geographic projection issue is still there, as well as some indexing speed issues, but overall, I was really impressed with the Sage/SciPy/NumPy experience today. Overall I feel that more of a transition was made for me last night/today. Which was great timing as a co-worker actually called me and asked if I knew of any free replacements for Matlab…


This was quite possibly the worst idea for title naming that I could have thought of. Anyway, I played around a bit more tonight, and I thought that I would give an update to the three people that are waiting with bated breath.

Anywho, I decided to continue trying to map the data from the netcdf file onto a projection, and here’s what I ran into.

It looks like the basemap module is installed (as basemap), but it depends on matplotlib >= 0.98 while 0.91 is installed. I tried to be tricky and move my locally installed matplotlib over to the sage/local/lib/python2.5/site-packages directory, but then that version of matplotlib needed a newer version of numpy than what was installed. At this point I tried

hostname $> sage -upgrade

to see if updated packages/modules were available. This started a huge chain reaction of downloads and source compiling to get to the latest, greatest versions. This process took exactly 59m10.482s to complete (I know because it told me!).

But once again, I get this error:

sage: from basemap import basemap

ImportError: your matplotlib is too old – basemap requires version 0.98 or higher, you have version 0.91.1

At this point though, it’s not working on either the linux or OSX platforms due to outdated dependencies, so either I need to find another way to plot mapped projections or use something else.

Again, this isn’t a knock against Sage, because I really don’t think this is an ideal test for the software. But honestly, a lot of why I went for this approach was to avoid having to use separate tools for data manipulation and visualization, and this would be a common task. Matlab’s mapping toolbox is useless to me for plotting, so I end up using m_map, which is still not as good as GMT, but it gets the job done in house.

My main thoughts at this point are that it seems easy to get into dependency hell here, as one module upgrade can force another, and so on. At this point it’s another block of time spent on setup, and no result. Time to stop for the time being.


Part 1 of the Sage experience was just installing the software. This was incredibly easy on both OSX and linux (CentOS 5.2 and Fedora 9). For the Fedora 9 install I just downloaded the latest version of Sage, which was compiled for Fedora 8, and this seemed to work just fine.

So for me, I really just wanted to be able to do a few different examples which would be close to “real world applications” for me.

Some things that I would like to be able to do in sage:

1. Load in a 2-D NetCDF satellite data file and display it as a map projection. This should be really simple. I would usually just use GMT for this (a small shell script wrapping psbasemap, grdimage, and pscoast).

2. Load in a data series with dates and locations, and match this to corresponding satellite data in time and space. Normally I would use a perl script that I wrote many moons ago to do this. I would basically sort the data, then match a block of data at a time using GMT’s grdtrack function. I know that this is inefficient, and really I would like to be able to pull extra data in x,y, or t and take the mean or median value, which would be more CPU intensive, but better than matching just one point in space and time to the nearest pixel.

3. Load in a multivariate data series and do multivariate statistics (e.g. LME, GLM/GAM, RDA). This is where the R interface would come into play. Normally I would prepare the data elsewhere, then import the flat table into R and use the R functions. This may involve installing more packages (nlme, mgcv, etc).

4. Load in a 3-D set (x,y,t) of satellite data files and perform an EOF analysis on them (akin to SVD in Matlab). Normally I would do this in Matlab or Ferret. I’m just curious how easy it would be to do this here.

There are other things that I could do, but these are a few off the top of my head, and things that I am doing now, so it would be incentive to try Sage out with. For tonight, I’ll just work on #1, which should be really fast.

The data file I’m using is just a NetCDF file (created by GMT) which I can read with pupynere in python. Here I’m going to use the scipy.io.netcdf module (which is actually based on pupynere I believe).

sage: from scipy.io.netcdf import *
sage: from pylab import *

# Read in file metadata to object
sage: ncfile = netcdf_file('RS2006001_2006031_sst.grd','r')

# get the variables in the data file
sage: ncfile.variables

{'x': <scipy.io.netcdf.netcdf_variable object at 0xb47b08c>,
'y': <scipy.io.netcdf.netcdf_variable object at 0xb47b16c>,
'z': <scipy.io.netcdf.netcdf_variable object at 0xb47b1ec>}

# Yank out data
sage: longitude = ncfile.variables['x'][:]
sage: latitude = ncfile.variables['y'][:]
sage: sst = ncfile.variables['z'][:]

# just display sst to test 2D image plotting
sage: imshow(sst)
[<matplotlib.AxesImage instance at 0xc03636c>]

Nice, but it’s upside down. Let’s flip it vertically.


sage: clf()
sage: imshow(flipud(sst))
[<matplotlib.AxesImage instance at 0xb86a2ac>]
sage: savefig('temp.png')


Easy, but I want to put this on a projection. Normally I would use the basemap tools, which are an add-on to matplotlib. I don’t see them installed, and I didn’t see them in the extra Sage packages online, so I downloaded them from SourceForge and installed them.

The first step is to install the geos package; just read the README in the geos folder and run

./configure
make

and then we get our first epic fail. Something in the geos chain won’t compile, and I’m just about fried enough to call it quits for this evening.

At this point I’ve been playing with this for more than 2 hours, and I still have yet to make a simple map on a projection. There has to be something I’m missing, but at this point I’m going to pause until tomorrow. So not the best testing evening, but there are some positives so far. The bundling of most packages is a plus, and the ease of loading in NetCDF files is nice. Data displays well using the Pylab interface, even though I am still forced to save to a file at this point.

So immediate goals:

1. Get a backend working for viewing plots in widgets (akin to ipython -pylab)

2. Get the basemap tools installed so that I can make a map with a projection!
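Goal #1 mostly amounts to selecting a matplotlib backend before pylab is imported. A minimal sketch of the mechanism (shown with the non-interactive Agg backend, since a widget backend like TkAgg needs a display; the filename is mine):

```python
import matplotlib
matplotlib.use('Agg')  # swap for 'TkAgg' or 'MacOSX' to get an interactive window
from pylab import plot, savefig

plot([1, 2, 3])
savefig('backend_test.png')  # with Agg, output still goes to a file
```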


OK, so I spent a bit of time yesterday thinking about some alternatives to Matlab versus the time/money balance of learning a completely new system. Today’s adventure activating Matlab 2008a has me definitely leaning in the “alternative” direction.

A little background on Matlab 2008a and the activation model. With 2008a, Matlab has moved to an activation model where you basically have to register your computer with The MathWorks. With an individual license you have two choices: lock the license to a computer, or lock it to an individual. As I use Matlab at home, at work, and on my laptop, I chose to lock it to myself.

Now Matlab 2008a came out in February 2008, and I’m only trying to install it now, entirely because of my fear of activation; usually I am eager to download and install the latest version. So I bit the bullet yesterday and installed the OSX (laptop) and linux versions. Usually I get a nice CD package with installers for all platforms, but that must have ended as well.

So this morning I figured that I would give installation a go. Usually a 5-10 minute affair, I spent 90+ minutes on this, and it still doesn’t work on the linux box. Basically what would happen is that I would go through the automated steps (generate the file key, license file, download and give info where needed, rinse, repeat) and watch Matlab completely have a hissy fit that either my username didn’t match, or my host ID didn’t match or that I didn’t have the desk pointed northeast (well, not that last one). I did my civic duty and dug around the troubleshooting site, where I found exactly three entries for activation problems. This must mean that I am completely unlucky to both have a problem and to have a non-standard problem that could not be fixed with answers #1-3.

At any rate, time is money, right? So I figured I would give tech support a call. I got through on the second try, and actually got a human who was surprisingly helpful. The main problem on both machines was that the host ID generated was from the MAC address of the active internet connection, which is not what the license manager wants to see. So the laptop fix was an easy one after all: just use the MAC address of the primary connection, which was the wired connection (I was using the wireless at the time). In hindsight, yeah, that makes sense, but there’s still too much voodoo involved for me.

The linux box was worse, where I do not have eth0 enabled since I have problems with it, and I use eth1 instead. So while to the operating system there is only eth1 (as far as I understand) the license file wants to communicate with eth0, which of course doesn’t exist in the software space (now I may be way off on this, but this is what I could come up with for now). So my options are to either rename eth1 to eth0, or activate eth0. Well guess what, at this point I’m inclined to just not run Matlab 2008a since I don’t really feel like risking the possibility of borking my internet connection just to get the latest version of Matlab running. So remind me again why I am paying those maintenance fees? Maybe I’m no activation super genius, but should it really be this difficult to get software that you’ve paid good money for to run? Sure, there’s still an open thread with tech support but how much more time do I want to spend on this today? Well that’s easy, none.

Now I may feel differently in a bit and try again. Or I may just drink more coffee and start installing Sage or Enthought and see how that goes. Or maybe I’ll just try to get some work done.


I’ve been a Matlab user for 15 years, and over that time period I’ve of course become fairly dependent on it to get things done quickly. The downside? It’s expensive. It’s a pretty penny to buy the base package, toolboxes are extra, and there are recurrent “maintenance” costs each year to get upgrades.

Sure, that’s standard practice, but each year I have to stand up and justify to my boss why we need to pay these costs for our multiple Matlab users in our shop (a multi-user concurrent license is out of the question, don’t even ask). So what’s a user to do?

For years we’ve just bitten the bullet and paid the fee, but with options such as R and NumPy/SciPy out there, it may be time to loosen the chain a bit. Or maybe not.

A couple of possible alternatives to Matlab and their respective pros and cons:

R

R is a really nice statistical environment which has pretty much become the industry standard, replacing the very expensive S-Plus. It’s easy to install, has an excellent GUI on OS X, and has a ton of community-released packages, which are usually made during the preparation of scientific papers. There are some downsides, as there can be multiple (sometimes conflicting) packages for the same task (e.g. gam vs mgcv), but choice is good, right? The cons for me are that it’s a new language to learn, and even though I write an m-script for everything, I find the scripting in R a bit clunky, even writing in TextWrangler and then hitting CTRL-R to have the SendToR script source the code for me. It’s just something new, and while the built-in functions are really nice, the learning curve for coding things is higher, and will it be faster in the long run than just using Matlab?

Numpy/SciPy

The Numpy/SciPy combo in Python is a viable alternative to Matlab, even having a page dedicated to showing you how easy it is to transition from Matlab. As with R, it’s free, and there are a ton of functions available, but there is a downside for me. I’ve successfully installed it on CentOS 5.1 and OS X 10.5, but it was a bit complicated. I know that these are packaged in many distributions, but not in CentOS, and I had to install from either source or .egg files, which isn’t all that tough, but took some time. I’m not writing the 24.3 steps I did to get it installed because honestly, I didn’t write it down and I don’t remember what I did. Next time I promise to list it out! On OS X I did it all through MacPorts on the MP version of python 2.5. Again, it took some massaging to get it all set up since I was using the non-default install of python.

Overall though, the reason for this little diatribe is that while there are alternatives to Matlab, they all involve learning new ways to do things which, even after I successfully learn them, may not be faster than just doing it in Matlab. Most of the time I just need to get things done, and the $7/day cost of Matlab may be well worth it if it saves me more than 10 minutes in a day (10 minutes at $42/hour is exactly $7).

I’m rambling a bit here, but these are just questions that I ask myself as I code things up at my desk. Each of these tools has its place, and in terms of maximum comfort and speed, I use each of them for its strengths. The main dilemma is that in a perfect situation I would drop the commercial Matlab for the free/open source alternatives, at a minimal cost in dollars and time.


While python comes standard on the Mac, I tend to also install the latest python using MacPorts so that I can get a whole python tree with numpy, scientificpython, ipython, etc. and not touch the default OS X python install.

For the most part this just involves a couple of incantations and setting a couple of aliases in my .profile file

alias python="/opt/local/bin/python2.5"
alias ipython="/opt/local/bin/ipython2.5"

For the most part this works well, and setting my PYTHONPATH environment variable (again in my .profile) to include my local python module directory covers my own modules.

export PYTHONPATH=$PYTHONPATH:/Users/{username}/library/python

One problem I have is that while vi(m) is my editor of choice, in some cases I like to code in TextWrangler, where I’d like to hit Command-R and have the script run. What happens then is that TextWrangler uses the standard /usr/bin/python, which for some weird reason would not read in my PYTHONPATH.

What I finally did was create a user.pth file containing my module directory and put it in

/Library/Python/2.5/site-packages/

Probably not the most elegant solution, but it worked.
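For the curious, a .pth file is just a text file listing directories, one per line, which the site module appends to sys.path at startup. A self-contained sketch of the mechanism using throwaway temp directories (the real file would contain /Users/{username}/library/python and live in the site-packages directory above):

```python
import os
import site
import sys
import tempfile

sitedir = tempfile.mkdtemp()     # stand-in for the site-packages directory
moduledir = tempfile.mkdtemp()   # stand-in for my personal module directory

# a .pth file lists one directory per line
with open(os.path.join(sitedir, 'user.pth'), 'w') as f:
    f.write(moduledir + '\n')

site.addsitedir(sitedir)         # processes user.pth and extends sys.path
```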


A simple problem really, but since I had to look for it I figured I would post it up. I actually got this from Peter Bengtsson’s blog, since he had run some benchmarks on speed.

At any rate, I kept the order preserving one since I like things in order to keep my life simple.

def uniq(inlist): 
    # order preserving
    uniques = []
    for item in inlist:
        if item not in uniques:
            uniques.append(item)
    return uniques

I put this in a unique.py in my ~/library/python directory (which is in my PYTHONPATH) and now I can pull this into scripts with:

from unique import uniq
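If speed ever matters on big lists, the same order-preserving behavior can come from tracking seen items in a set, turning the O(n^2) scan into roughly O(n). A sketch of that variant:

```python
def uniq_fast(inlist):
    # order preserving, with a set for O(1) membership tests
    seen = set()
    uniques = []
    for item in inlist:
        if item not in seen:
            seen.add(item)
            uniques.append(item)
    return uniques
```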

