Tuesday, August 10, 2010
I spent some time coding on the weekend (which many would consider sad, but not me) and built a simple push-button GUI that lets the user refine WOD queries by instrument and month. If I weren't limited by the X interface, I'd add more SQL-type functionality and allow queries on any field (year, cruise ID, etc.). More work for the future, I guess. Here's a screen shot of the button bar for choosing instrument type and month of observation: the top row is for the month, the bottom row for instrument type. The button labels switch to UPPERCASE when selected and back to lowercase when deselected. The query only runs after a geographic selection in the map window. The example shows several hundred CTD casts collected in July, August and September in the Northwest Passage in northern Canada, one of my favourite areas I've ever worked in.
The CTD data are plotted in the main SVP Tool window above: sound speed, temperature and salinity versus depth (left to right, respectively).
More work? Of course. If I turn on the debug mode of the WOD query engine, I can see that it spends an awful lot of its time unzipping the raw source data for each WMO square it visits during the extraction. I'm still looking to speed this up if I can; right now the code fires off a system("gunzip -c file > tempfile"); whenever it hits a zipped WOD file. I'm curious whether it would be faster to go through zlib instead. Kurt had also suggested converting the WOD files to a binary format, so that's another option that would be guaranteed to bear some fruit, but I really do like the elegance of sticking with the raw WOD files (easy to update, no need to reconvert with each release). More thinking needs to be done, obviously... Suggestions are welcome.
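For the record, here's roughly what the zlib route would look like. zlib's gzFile interface (gzopen/gzread/gzclose) reads a gzipped stream as if it were a regular file, so the whole tempfile dance goes away. This is just a sketch to think about, not code that's in svp_tool yet, and the buffer-growing strategy is arbitrary:
#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>
/* Sketch: slurp an entire gzipped WOD file into memory via zlib,
 * replacing the system("gunzip -c file > tempfile") call.
 * Error handling is minimal; the caller frees the returned buffer. */
char *wod_read_gzipped(const char *fname, size_t *len_out)
{
    gzFile gz = gzopen(fname, "rb");
    if (gz == NULL) return NULL;
    size_t cap = 1 << 20;   /* start at 1 MB, double as needed */
    size_t len = 0;
    char *buf = malloc(cap);
    int n;
    while (buf != NULL &&
           (n = gzread(gz, buf + len, (unsigned)(cap - len))) > 0)
    {
        len += (size_t)n;
        if (len == cap)
        {
            char *tmp = realloc(buf, cap * 2);
            if (tmp == NULL) { free(buf); gzclose(gz); return NULL; }
            buf = tmp;
            cap *= 2;
        }
    }
    gzclose(gz);
    *len_out = len;
    return buf;
}
zlib also has gzseek(), so the existing fseek-to-offset trick could carry over, but seeking in a gzip stream still decompresses everything up to the target offset, so slurping the whole file once per WMO square is probably the better bet.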
Wednesday, August 4, 2010
Downloading WOD
You can download the WOD files from the NODC website through an online query (http://www.nodc.noaa.gov/OC5/SELECT/dbsearch/dbsearch.html). You can also download the entire database if you look hard enough for the links (http://www.nodc.noaa.gov/OC5/WOD09/data09geo.html). I didn't feel like pointing and clicking myself to death, so I wrote the script below to download the entire thing. I also added a line at the end to compile a summary database of the CTD and XBT data for use in svp_tool.
#!/bin/bash
count=0
mkdir -p {OSD,MBT,CTD,XBT,PFL,MRB,DRB,APB,UOR,GLD}/OBS
mkdir -p {OSD,MBT,CTD,XBT,PFL,MRB,DRB,APB,UOR,GLD}/STD
echo "Downloading the WOD database"
for quadrant in 1 3 5 7
do
for latitude in 0 1 2 3 4 5 6 7 8
do
for longitude in 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17
do
let count++
for type in OSD MBT CTD XBT PFL MRB DRB APB UOR GLD
do
# Get the "observed" data
file=$type\O$quadrant$latitude$longitude.gz
wget http://data.nodc.noaa.gov/woa/WOD09/GEOGRAPHIC/$type/OBS/$file
mv $file $type/OBS
# Get the "standard level" data (observed is filtered and then
# interpolated to standard depth levels)
file=$type\S$quadrant$latitude$longitude.gz
wget http://data.nodc.noaa.gov/woa/WOD09/GEOGRAPHIC/$type/STD/$file
mv $file $type/STD
done
done
done
done
echo got total $count
# Now add a tmp directory for use by the WOD library for unzipping database files
# during extraction
mkdir tmp
# Now build database of CTD and XBT data
echo "Building a CTD/XBT summary database"
wod2db3 -out wod.db3 CTD/OBS/CTDO*gz XBT/OBS/XBTO*gz
echo
echo
echo "Add WOD_PATH="`pwd` "as an environment variable"
echo "Add WOD_DATABASE_NAME="`pwd`"/"wod.db3 "as an environment variable"
exit
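Those two echoed lines aren't just decoration: WOD_PATH is what the WOD reading library uses to locate the raw files during extraction, and WOD_DATABASE_NAME is what svp_tool uses to find the summary database, so it's worth adding both to your shell configuration right away.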
Tuesday, August 3, 2010
sqlite3 and the World Ocean Database, continued
I got things sped up quite a bit by directly querying the database file I had created (see my last post) and by returning the query results as an array of watercolumn structures instead of writing them to disk and then reading them in again in my svp_tool.
Here's the bit of code that deals with building the query:
int wod_extract_profiles (WOD_Reader * wod_reader)
{
int rc;
char *zErrMsg = 0;
int i;
int offset;
char month_list[50];
char instrument_list[50];
char sql_string[500];
memset(month_list,0,50);
memset(instrument_list,0,50);
memset(sql_string,0,500);
// There must be easier ways to do this than the way I'm doing it
month_list[0]='(';
offset = 1;
for (i=1;i<=12;i++)
{
if (!wod_reader->query_bounds.month[i]) continue;
sprintf(&month_list[offset],"%d,",i);
offset = strlen(month_list);
}
month_list[offset-1]=')';
// There must be easier ways to do this than the way I'm doing it
instrument_list[0]='(';
offset = 1;
for (i=1;i<=WOD_NUM_TYPES;i++)
{
if (!wod_reader->query_bounds.instrument[i]) continue;
sprintf(&instrument_list[offset],"%d,",i);
offset = strlen(instrument_list);
}
instrument_list[offset-1]=')';
// Check some of the query bounds before we start
if (wod_reader->query_bounds.longitude_west > 180) wod_reader->query_bounds.longitude_west -= 360;
if (wod_reader->query_bounds.longitude_east > 180) wod_reader->query_bounds.longitude_east -= 360;
// Form the query: select records based on geo/time bounds and then sort in such a way that
// the hits return in WOD filename order to minimize jumping around between files
sprintf(sql_string,"SELECT * FROM tbl1 WHERE latitude > %.3f AND latitude < %.3f AND longitude > %.3f AND longitude < %.3f AND month in %s AND instrument in %s ORDER BY instrument, observed, quadrant, wmo_lat, wmo_lon, year, month, day;",
wod_reader->query_bounds.latitude_south, wod_reader->query_bounds.latitude_north,
wod_reader->query_bounds.longitude_west, wod_reader->query_bounds.longitude_east,month_list,instrument_list);
if (wod_reader->debug) printf("Doing SQL string %s\n",sql_string);
printf("Doing WOD query...patience\n");
// Do the query and deal with the results
rc = sqlite3_exec(wod_reader->db, sql_string, wod_query_callback, wod_reader, &zErrMsg);
if (rc) error("SQL error: %s",zErrMsg);
return(0);
}
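About that "there must be easier ways" comment: something like the following helper would tidy up both loops. It's a sketch (build_in_list isn't in the actual code) that produces the same "(7,8,9)" style strings the loops above generate:
#include <stdio.h>
/* Hypothetical helper: build an SQL IN-list such as "(7,8,9)" from a
 * 1-indexed flag array like query_bounds.month[]. The caller should
 * make sure at least one flag is set, as the loops above assume too. */
static void build_in_list(char *dst, size_t dstlen, const int *flags, int nflags)
{
    size_t off = 0;
    int i, first = 1;
    off += snprintf(dst + off, dstlen - off, "(");
    for (i = 1; i <= nflags; i++)
    {
        if (!flags[i]) continue;
        off += snprintf(dst + off, dstlen - off, first ? "%d" : ",%d", i);
        first = 0;
    }
    snprintf(dst + off, dstlen - off, ")");
}
The two list-building loops would then collapse to build_in_list(month_list, sizeof(month_list), wod_reader->query_bounds.month, 12); and the matching call with WOD_NUM_TYPES for the instrument list.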
The callback function (wod_query_callback) passed to sqlite3_exec deals with the query results one at a time. Here's what's going on in there:
static int wod_query_callback(void * reader_in, int argc, char **argv, char **azColName)
{
int i;
int iend;
int got_new_file;
WOD_Reader *wod_reader;
char * inname = NULL;
int file_offset, num_bytes;
int instrument, observed, quadrant, wmo_lat, wmo_lon;
char type_lowercase[4];
char type_uppercase[4];
memset(type_uppercase,0,4);
memset(type_lowercase,0,4);
wod_reader = (WOD_Reader *) reader_in;
got_new_file = 0;
// Parse the output record from the query
for (i = 0; i < argc; i++)
{
if (wod_reader->debug) printf("%s = %s\n", azColName[i], argv[i] ? argv[i] : "NULL");
if (!strcmp(azColName[i],"instrument"))
{
instrument = atoi(argv[i]);
if (instrument != wod_reader->instrument) got_new_file++;
}
else if (!strcmp(azColName[i],"observed"))
{
observed = atoi(argv[i]);
if (observed != wod_reader->observed) got_new_file++;
}
else if (!strcmp(azColName[i],"quadrant"))
{
quadrant = atoi(argv[i]);
if (quadrant != wod_reader->quadrant) got_new_file++;
}
else if (!strcmp(azColName[i],"wmo_lat"))
{
wmo_lat = atoi(argv[i]);
if (wmo_lat != wod_reader->wmo_lat) got_new_file++;
}
else if (!strcmp(azColName[i],"wmo_lon"))
{
wmo_lon = atoi(argv[i]);
if (wmo_lon != wod_reader->wmo_lon) got_new_file++;
}
else if (!strcmp(azColName[i],"file_offset"))
{
file_offset = atoi(argv[i]);
}
else if (!strcmp(azColName[i],"num_bytes"))
{
num_bytes = atoi(argv[i]);
}
}
if (got_new_file)
{
if (wod_reader->debug) printf("Need to load a new WOD file!!\n");
// Got a new file, so close the old one...
wod_close_file(wod_reader);
memcpy(type_uppercase,wod_data_types[instrument],3);
type_lowercase[0] = tolower(type_uppercase[0]);
type_lowercase[1] = tolower(type_uppercase[1]);
type_lowercase[2] = tolower(type_uppercase[2]);
// Build up the filename
inname = calloc(strlen(wod_reader->wod_path)+50,1);
memset(inname,0,strlen(wod_reader->wod_path)+50);
// This is the type of path we're trying to construct: ctd/OBS/CTDO7807.gz
sprintf(inname,"%s/%s/OBS/%sO%d%d%0.2d.gz",
wod_reader->wod_path,type_lowercase,type_uppercase,quadrant,wmo_lat,wmo_lon);
if (wod_open_file(wod_reader,inname))
{
sprintf(inname,"%s/%s/OBS/%sO%d%d%0.2d",wod_reader->wod_path,type_lowercase,type_uppercase,
quadrant,wmo_lat,wmo_lon);
if (wod_open_file(wod_reader,inname))
{
printf("Can't open file %s ... non-existant??\n",inname);
free(inname);
inname = NULL;
return(1);
}
}
if (inname) free(inname);
inname = NULL;
}
if (wod_reader->debug) printf("%s: fseeking to %d for %d bytes\n",wod_reader->fname, file_offset,num_bytes);
// Seek to the right spot in the file
fseek(wod_reader->fp,file_offset,SEEK_SET);
// Now read the profile at that location
if ((iend = oclread (wod_reader)) == -1)
{
wod_close_file(wod_reader);
return(1);
}
// Convert the cast to OMG SVP format, this updates the extracted_watercolumns array
wod2svp(wod_reader);
return 0;
}
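One gotcha with this approach: when a callback handed to sqlite3_exec returns non-zero (as it does above when a file can't be opened or read), sqlite3_exec stops walking the results and returns SQLITE_ABORT, so one bad file currently ends the whole extraction rather than just skipping that cast. Probably fine for now, but worth remembering.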
Now I've just got to build a query-building utility in the graphical end that lets the user select instrument, month and/or other fields. Geographic bounds are currently selected by lasso in the map viewer, and the month defaults to the current month.
I'm also planning on speeding up the oclread function (it was delivered with the free NODC code that comes with the WOD); right now it reads things a single field at a time, and I think it can be sped up a bit by reading the entire cast into memory and then parsing it from there.
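Since the summary database already stores num_bytes for each cast, the buffering half of that is nearly a one-liner; the parsing half is the real work. A sketch, where parse_cast_from_buffer() is a hypothetical replacement for oclread's field-at-a-time reads and everything else mirrors the callback above:
/* Sketch: read the whole cast in one fread, then parse from memory. */
char *buf = malloc(num_bytes);
if (buf != NULL &&
    fseek(wod_reader->fp, file_offset, SEEK_SET) == 0 &&
    fread(buf, 1, num_bytes, wod_reader->fp) == (size_t)num_bytes)
{
    parse_cast_from_buffer(wod_reader, buf, num_bytes);
}
free(buf);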
jb
Friday, July 30, 2010
sqlite3 and the World Ocean Database
I'm working on getting the World Ocean Database (WOD) to be directly accessible by my svp_tool (sound velocity processing toolkit) and wanted to document some of the procedure I took. Here's a map view of a query area near the Mariana Trench, the green squares show the locations of the 125 CTD casts in the WOD (for the month of August).
The resulting sound speed, temperature and salinity vertical profiles are shown below (left to right, all plotted versus depth on the vertical axis).
The WOD is a database of oceanographic data recording physical, biological and chemical measurements; the 2009 release holds roughly 10 million sets of measurements. The data are organized spatially by WMO square and by instrument type: 4 quadrants × 9 latitude bands × 18 longitude bands gives 648 squares, and with 10 instrument types each stored in "observed" and "standard level" versions, that works out to 12,960 files. I'm mainly interested in the XBT and CTD measurements; those are the files I need to tuck into to extract temperature and/or salinity profiles for a user-selected geographic region and range of months. The files are in an awkward ASCII format and take up about 6GB on disk even in gzipped form. I'd like to make the lookups as quick as possible, so:
0) Wrap up the sample I/O C program as a C library (wod.{c,h}). The sample program that comes with the database has lots of global variables, so I wrapped it all up in a structure and wrote functions to initialize, open/close files and read data. These neatly hide the details and keep the programmer (me) from stepping on my own toes with all those globals and the obscure function calls that have to happen just to read a single cast.
1) Build a database. The raw files contain header information (position, date, cruise number, etc.) and then the measurements themselves. Having to unzip files just to search through them for casts that intersect the area of interest is slow and painful. I took the sample C program from their website that reads the WOD, plus my new C library (wod.{c,h}), and morphed it into a program that writes out a sqlite3 database containing a small subset of the header info so that I can quickly query and find casts. The database also contains enough info to determine which of the 12,960 files an individual cast falls within, and its file offset. Based on some advice from Kurt, I went with sqlite3 and googled my way through the problem. Here are a few juicy bits from the code; it was pretty simple in the end.
#include <sqlite3.h>
int main()
{
....
// declaration
sqlite3 *db;
....
// open a database with (char * outname)
sqlite3_open(outname, &db);
....
// Create a table, will puke if the table has been added already
sqlite3_exec(db, "create table tbl1 (instrument integer, observed integer, quadrant integer, wmo_lat integer, wmo_lon integer, latitude real, longitude real, year integer, month integer, day integer, file_offset integer)", callback, 0, &zErrMsg);
.....
// Now insert some records
sprintf(sql_string,"INSERT INTO tbl1 (instrument, observed, quadrant, wmo_lat, wmo_lon, \
latitude, longitude, year, month, day, file_offset) VALUES (%d,%d,%d,%d,%d,%.3f,%.3f,%4d,%d,%d,%d)",
wod_reader.instrument,wod_reader.observed,wod_reader.quadrant,
wod_reader.wmo_lat,wod_reader.wmo_lon,
wod_reader.latitude,wod_reader.longitude,
wod_reader.year,wod_reader.month,wod_reader.day,file_offset);
rc = sqlite3_exec(db, sql_string, callback, 0, &zErrMsg);
// Build indices based on latitude and longitude as I expect to be querying against
// these most often
sqlite3_exec(db, "CREATE INDEX tbl1_idx_longitude ON tbl1(longitude)", callback, 0, &zErrMsg);
sqlite3_exec(db, "CREATE INDEX tbl1_idx_latitude ON tbl1(latitude)", callback, 0, &zErrMsg);
// Now close the database.
sqlite3_close(db);
...
}
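One performance note on the insert loop (elided above): by default sqlite3 wraps every INSERT in its own transaction, which is brutally slow when you're inserting millions of header records. Batching the whole load into a single transaction is the usual fix; a sketch:
/* Wrap the bulk load in one transaction; without this, each
 * INSERT commits separately and the load crawls. */
sqlite3_exec(db, "BEGIN TRANSACTION", callback, 0, &zErrMsg);
/* ... the INSERT loop from above goes here ... */
sqlite3_exec(db, "COMMIT", callback, 0, &zErrMsg);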
Kurt had suggested using the SQLite Manager extension in Firefox to have a peek at what's in the database. I did so, and everything seemed okay.
Next steps?
- add sqlite3 querying ability into svp_tool
- add functionality to wod.{c,h} to read casts based on query results; right now it searches the entire WOD namespace and quickly rejects files that can't possibly intersect the spatial query bounds. Files that do intersect are then unzipped and searched sequentially to find the observations that fall within the specified area (and month). ALL records are read through in their entirety even though MOST aren't desired.
- modify wod.{c,h} to read the .gz files directly through zlib instead of gunzipping and then re-gzipping the files for each extraction.
Wednesday, April 7, 2010
Follow up on compiling OMG code
So, I mentioned yesterday that I was gearing up to compile a bunch of C code that I use, but I never really described how to get through the compilation step. The code I'm referring to is a bundle of programs put together by the Ocean Mapping Group at the University of New Brunswick in Fredericton, Canada. The group maintains a code repository; to check out a version you'll need to contact someone from the group for access. This assumes that you've installed Xcode and the extra packages outlined in my post from yesterday. Here it is again, just in case (don't forget to download and install Xcode first):
$ fink install lesstif
$ fink install gsl
$ fink install imagemagick
$ fink install gmt
Step 1 is to get the code repository. You'll need to change "user" to your username given to you by the OMG sys admin folks and machine to the machine they tell you that the repository is stored on.
$ svn checkout svn+ssh://user@machine.omg.unb.ca/drives/machine/disk1/code
That'll checkout the repository (name "code") into the directory you're currently sitting in. You can move it anywhere on your file system after you've checked it out.
Step 2 is to set up some environment variables in your shell's configuration files; these direct the code-compiling script where to install things and also update your path with the appropriate directory. I had decided to switch from tcsh to bash since I was already making a transition from PC to Mac, so I had to put together some config files for bash in my home directory. Here's a dump from the .bash_profile file that I cobbled together. You'll need to change username to your username.
PATH=$PATH:/Users/username/local/bin
export PATH
# For OMG software development
export DEVELOPMENT_HOME=/Users/username
# Sneaky way to shoehorn 32 bit architecture into the makefiles
export CCOMPILER="gcc -m32"
export LOCALLIB=/Users/username/local/lib
export LOCALBIN=/Users/username/local/bin
export LOCALOBJ=/Users/username/local/obj
# For subversion, but only if you're doing commits
export SVN_EDITOR=/usr/bin/vi
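Don't forget to open a fresh terminal (or source ~/.bash_profile) so the new variables take effect before you compile.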
Step 3. After that's done, it's time to compile. This uses a mildly flaky script that compiles everything in the appropriate order and then places all the goodies in the locations specified by the environment variables defined earlier. The script is written in tcsh, so you might want to make sure you have that installed (it was on my Mac, but you have to install it on Ubuntu). The script will dump everything in ~/local, again based on the environment variables set earlier. It will also create an error.log file with a listing of everything it had trouble with.
$ cd ~/code
$ ./compile_every_gd_thing
That should do it!
Tuesday, April 6, 2010
Getting my Mac on
I started my new job on Friday and came face to face with what had seemed like a good idea one month earlier: switching my digital life from a PC to a Mac. Backstory: I've been living off of a 5-year-old Dell Latitude dual-booting Windows XP (used only for MS Office) and Xubuntu (9.10), and I was sick and tired of rebooting into Windows just to have a clean look at PowerPoint and Word files. I'm a command-line kind of guy, so I figured I could do all of that on a Mac without the Windows hassle. That was the logic behind the move; only time will tell whether it was the right decision. I'm using this blog to document everything I've done during the transition so that I might someday remember all of this if I have to move to another Mac. Maybe it will even be useful to another Mac newbie.
As a few days have passed already, I'm just jotting down stuff from memory. First order of business was to make sure I could compile all the C-code I've been working with over the years. After talking to some co-workers, I found out I need to install this stuff:
- Xcode: this puts all sorts of development goodies at my fingertips, e.g. gcc and the X headers for the Linuxy graphics programs I use. After signing up to be an official Apple Developer (Mom would be so proud), I downloaded a .dmg archive, which installed simply by dragging it onto my Applications folder.
- fink: this gets me lots of the open source code I use for software development and for scripts in my research. The stuff I really needed to get going included the GNU Scientific Library, GMT, ImageMagick, lesstif and a few others I can't remember off the top of my head. Fink downloaded as a tarball and was pretty painless to install. It asked an awful lot of questions and I just went with the defaults; I found out afterwards that I probably should have chosen the "unstable" branch, but it's worked well for me so far.
That's it for now...