Deploying Package Releases to PyPI

So you've made some cool new improvements to your Python package and you want to deploy a new release. It's just a few easy steps.

  1. After your package modules have been updated, you have to update the version number in your setup file. The format is major.minor.micro, and which part you increment depends on what you changed. For example, a small bug fix to v0.2.3 would increment to v0.2.4, while a new feature might increment to v0.3.0.
  2. Then make sure all the changes are committed and pushed to GitHub.
  3. Build the source distribution with python setup.py sdist
  4. Upload with twine upload dist/compressed_package_filename

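The version arithmetic in step 1 can be sketched with a small helper. This is just an illustration of the major.minor.micro convention; bump_version is a name I made up, not part of any packaging tool:

```python
def bump_version(version, part):
    """Increment one part of a 'vmajor.minor.micro' version string.

    part is 'major', 'minor', or 'micro'; everything to the right of
    the bumped part resets to zero.
    """
    major, minor, micro = (int(n) for n in version.lstrip('v').split('.'))
    if part == 'major':
        major, minor, micro = major + 1, 0, 0
    elif part == 'minor':
        minor, micro = minor + 1, 0
    elif part == 'micro':
        micro += 1
    else:
        raise ValueError("part must be 'major', 'minor', or 'micro'")
    return 'v{}.{}.{}'.format(major, minor, micro)
```

So a bug fix bumps v0.2.3 to v0.2.4, while a new feature resets the micro digit and gives v0.3.0.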
That's it!

Adding Data to the BDNYC Data Archive

To add data to any table in BDNYC.db, there are two easy steps. As our working example, we'll add some new objects to the SOURCES table.

Step 1: Create an ascii file of the data

First, you must choose a delimiter, which is just the character that will break up the data into columns. I recommend a pipe '|' character since they don't normally appear in text. This is better than a comma since some data fields may have comma-separated values.

Put the data to be added in an ascii file with the following formatting:

  1. The first line must be the |-separated column names to insert/update, e.g. ra|dec|publication_id. Note that the column names in the ascii file need not be in the same order as in the table. Also, only the column names that match will be added; non-matching or missing column names will be ignored. E.g. spectral_type|ra|publication_id|dec will ignore the spectral_type values, since this is not a column in the SOURCES table, and insert the other columns in the correct places.
  2. If a record (i.e. a line in your ascii file) has no value for a particular column, type nothing. E.g. for the given column names ra|dec|publication_id|comments, a record with no publication_id should read 34.567|12.834||This object is my favorite!.
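Putting both formatting rules together, a small upload file for the SOURCES table might look like this (the data values here are made up):

```text
ra|dec|publication_id|comments
34.567|12.834||This object is my favorite!
215.430|-12.081|5|Candidate nearby source
```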

Step 2: Add the data to the specified table

To add the data to the table (in our example, the SOURCES table), import and initialize the .db file. Then run the add_data() method with the path to the ascii file as the first argument and the table to add the data to as the second argument. Be sure to specify your delimiter with delim='|'. Here's what that looks like:

from BDNYCdb import BDdb
db = BDdb.get_db('/path/to/the/database/file.db')
db.add_data('/path/to/the/upload/file.csv', 'sources', delim='|')

That's it!

BDNYC at AAS 225

BDNYC (and friends) are out in force for the 225th meeting of the American Astronomical Society!

Please come see our posters and talks (mostly on Monday). To whet your appetite, or if you missed them, here are some samplers:

Munazza Alam (Monday, 138.40)
High-Resolution Spectral Analysis of Red & Blue L Dwarfs
Sara Camnasio (Monday, 138.39)
Multi-resolution Analysis of Red and Blue L Dwarfs
Kelle Cruz and Stephanie Douglas (Monday, 138.37)
When good fits go wrong: Untangling Physical Parameters of Warm Brown Dwarfs

Stephanie Douglas (Monday, 138.19)
Rotation and Activity in Praesepe and the Hyades

Jackie Faherty (Talk, Monday 130.05)
Clouds in the Coldest Brown Dwarfs

Joe Filippazzo (Monday, 138.34)
Fundamental Parameters for an Age Calibrated Sequence of the Lowest Mass Stars to the Highest Mass Planets

Paige Giorla (Monday, 138.44)
T Dwarf Model Fits for Spectral Standards at Low Spectral Resolution
Kay Hiranaka (Talk, Monday 130.04D)
Constraining the Properties of the Dust Haze in the Atmospheres of Young Brown Dwarfs

Erini Lambrides (Thursday, 432.02)
Can 3000 IR spectra unveil the connection between AGN and the interstellar medium of their host galaxies?

Emily Rice (Tuesday, 243.02)
STARtorialist: Astronomy Outreach via Fashion, Sci-Fi, & Pop Culture
Adric Riedel (Monday, 138.38)
The Young and the Red: What we can learn from Young Brown Dwarfs

Using WriteLaTeX for Collaborative Papers

Update: WriteLaTeX is now Overleaf

We all love Google Docs. It's a functional and convenient way to share and collaboratively edit documents across platforms, time zones, and even continents. We in the BDNYC group use it extensively.

But what if you want to write a scientific paper? Google Docs, as awesome as it is, is not much more than a word processor. We want the internal hyperlinks for sections, figures, tables, and citations, elegant mathematical formulae, well-formatted tables, more control over where and how our components are arranged - in a word, LaTeX. Yes, LaTeX has its own host of problems, but it's very good at what it does.

There are a number of collaborative editing projects out there - Authorea springs to mind. But one of the simpler options is actually pretty good: WriteLaTeX.

Example Query: SpeX Prism Spectra for Very Red, Field Age, Early-L Dwarfs


Here we needed the SpeX Prism spectra for all the field age L0-L5 dwarfs with J-Ks color greater than a certain value that is different for each spectral type.

To perform this query we needed to join the SOURCES, SPECTRAL_TYPES, PHOTOMETRY, and SPECTRA tables by source_id. Then in the 'where' clause, we had to specify:

  • Our band 1 is 'J' and band 2 is 'Ks' both from system 2 (2MASS) to get the J-Ks color,
  • The instrument_id for the spectra must be 6 (SpeX),
  • The gravity must not be β or γ, since we want field age objects only,
  • And then we joined a bunch of conditionals for each spectral type/color requirement in parentheses with 'or' operators.

import BDdb
db = BDdb.get_db('/path/to/Dropbox/BDNYCdb/BDNYC.db')
red_field_Ls = db.query.execute("SELECT a.id, a.names, b.spectral_type, b.gravity, (c1.magnitude-c2.magnitude) AS color, d.wavelength, d.flux, d.unc FROM sources a JOIN spectral_types b ON a.id=b.source_id JOIN photometry c1 ON a.id=c1.source_id JOIN photometry c2 ON a.id=c2.source_id JOIN spectra d ON a.id=d.source_id WHERE b.gravity IS NULL AND c1.band='J' AND c1.system=2 AND c2.band='Ks' AND c2.system=2 AND d.instrument_id=6 AND b.regime='OPT' AND ((color>=1.3 AND b.spectral_type=10) OR (color>=1.35 AND b.spectral_type=11) OR (color>=1.48 AND b.spectral_type=12) OR (color>=1.64 AND b.spectral_type=13) OR (color>=1.69 AND b.spectral_type=14) OR (color>=1.72 AND b.spectral_type=15))").fetchall()

This returns a list of results of the form [source_id, name, optical spectral type, gravity suffix (which should be None), J-Ks color, wavelength array, flux array, uncertainty array].

Example Query: High Resolution NIRSPEC data


Here we needed the high resolution J band spectra for NIRSPEC orders 61 and 65 for all available objects in the database. In this snippet, we use the dict method instead of the usual query method so we can dump all of our spectra and metadata into D, a list of dictionary-like rows.

import BDdb
db = BDdb.get_db('/path/to/Dropbox/BDNYCdb/BDNYC.db')
D = db.dict.execute("select * from spectra where wavelength_order in (61,65)").fetchall()

Then we can just use a list comprehension to iterate through the list and grab all the data like a Python dictionary with the column names as keys, i.e. D[n]['flux'] gives you the flux array, D[n]['source_id'] the source_id, etc. for the n results in your query.
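As an illustration of that access pattern, here is a stand-in result list with made-up values (the real rows come back from db.dict and carry every column of the SPECTRA table):

```python
# Stand-in for the result of db.dict.execute(...).fetchall():
# each row behaves like a dictionary keyed on column name.
D = [
    {'source_id': 86, 'wavelength_order': 61, 'flux': [1.0, 1.1, 0.9]},
    {'source_id': 92, 'wavelength_order': 65, 'flux': [0.7, 0.8, 0.75]},
]

# Grab one column across all rows with a list comprehension:
fluxes = [row['flux'] for row in D]

# Or filter the rows on any column:
order_61 = [row for row in D if row['wavelength_order'] == 61]
```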

Documentation using Sphinx

Sphinx is a free tool that generates documentation, primarily for Python projects.  We use it for HTML output, but it also supports LaTeX, ePub, Texinfo, manual pages, and plain text formats.  The markup language of Sphinx is reStructuredText (reST).  Of Sphinx's many features, the one most useful thus far is its auto-documentation of Python files, the steps of which this post details below.

Sphinx auto-documentation steps:

  1. Install Sphinx at the command line using: easy_install -U Sphinx
  2. Read through the Sphinx documentation.
  3. Run Sphinx's getting-started command sphinx-quickstart to quickly set up directories and configurations. NOTE: Since I have already completed this setup, it DOES NOT need to be run again for the BDNYC documentation.
    1. Access the documentation directories in Dropbox through the shared folder BDNYCdb/Documentation
    2. The Documentation folder contains the .rst and .py files that Sphinx uses to build documentation.  Place any files for which documentation is to be made within that directory since Sphinx automatically looks there.
      Screenshot: BDdb.html within the Documentation directories
  4. The index.rst file is the master file that contains the table of contents (known as a TOCtree) for the entire documentation.  Within the TOCtree, Sphinx knows that all the files are .rst, so there is no need to specify the file extension or path.  The index also serves as a welcome page for users, so it may include an introduction or other information that would be useful for outside reference.  Within the TOCtree (and all of reST), indentation and notation are especially important.  Below is an example of a functioning TOCtree (the directive must be followed by a blank line):

    .. toctree::
       :maxdepth: 2

    The :maxdepth: option specifies the depth of nesting allowed.  In order to keep the index clean and aesthetically simple, a maximum of one nested page is reasonable. In other files, however, highly nested pages are not only common, but well-supported by reST.
    Screenshot of index.rst
  5. For each item in the TOCtree of the index file, there must be a corresponding .rst file in the main directory for Sphinx to nest within the index HTML file.  In the above example, a Database_instructions.rst file might contain in plain English how to interact with the database, while the BDdb.rst file contains the auto-documentation of the BDdb module.
  6. The .rst files within the TOCtree of the index must be formatted like below to correctly call the Python files.  The row of = signs makes the page header, the :members: option tells Sphinx to pull everything from the file, and the block must end with a blank line.

    [name of module]
    ================

    .. automodule:: [name of Python file w/o extension]
       :members:

    Screenshot of BDdb.rst

  7. The referenced Python files for auto-documentation can either be located in the main directory, or their location can be specified.  Since we will be constantly pushing and pulling from GitHub, it is best for those doing documentation to change the file path to their local copies of the Python files before making documentation.  This can be achieved as follows:
  8. Open conf.py, which is the configuration file for Sphinx.  It was produced during the "quickstart" step to allow easy modification of aesthetic features (i.e., theme, font) and to facilitate the HTML building process.
    1. Change the file path in line 21 to the applicable local copy.
    2. Comment out the file path of previous users in order to make it easier for others to update documentation.
  9. In order to make the HTML files, move Terminal to the directory of the documentation, i.e., '.../Dropbox/BDNYCdb/Documentation' and run make html.
    1. The make html function comes built-in through running "quickstart" and uses the makefile in the main directory.  It can take other targets as well, including make clean, which deletes the entire contents of the "build" directory.
    2. Sphinx should put the generated HTML files in the /_build/html directory.
  10. To make the HTML files live, use the application Cyberduck.
    1. Click "Open Connection" and put in the server, username, and password.
    2. Drag and drop any .rst and HTML files that were modified into the appropriate directory and the documentation should go live immediately!
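Putting steps 4 and 5 together, a minimal index.rst might look like the sketch below. The welcome text is illustrative; Database_instructions and BDdb are the .rst files mentioned above:

```rst
Welcome to the BDNYC Database Documentation
===========================================

This documentation describes the BDNYC database and the BDdb module.

.. toctree::
   :maxdepth: 2

   Database_instructions
   BDdb
```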


References:
  • Sphinx documentation homepage
  • Outline of Sphinx documentation
  • Using reStructuredText in Sphinx
  • Auto-documentation documentation
  • Download Cyberduck

Rotational and Vibrational Energy of Diatomic Molecules

Molecules can store energy in 4 different ways: translation (whole molecule moves in each of 3 Cartesian coordinates), rotation (whole molecule rotates—linear molecules can rotate around 2 axes, nonlinear around all 3 axes), vibration (bonds stretch and contract—3N-5 kinds of vibrations for linear molecules, 3N-6 for nonlinear), and electronic (ground state vs. excited). The energy absorbed in electronic transitions is usually in the UV-visible range, that for vibrations is in the IR range, for rotations in the microwave range, and the transition energy for translations is basically infinitesimal. This is because electronic transitions require the most energy, then vibrations, rotations and finally translations require the least. With high enough resolution, rotational transitions can be seen in vibrational spectra, with the rotational transitions giving structure to the vibrational absorption peaks.

Figure 1: An energy diagram showing the relative energies for rotational, vibrational, and electronic transitions and how they overlap [3].

The energy of rotational and vibrational levels can be calculated separately and summed. Vibrations can be modeled using the simple harmonic oscillator—the potential energy of which comes from Hooke’s law. Rotational energy of a molecule can be calculated by assuming the molecule acts as a rigid rotor and solving the corresponding Schrodinger Equation. Both of these types of energy are quantized, which accounts for the vibrational (v) and rotational (J) quantum levels in the equation.

The resulting energy levels, in Joules, are

E(v, J) = ħ√(k/μ) (v + 1/2) + (ħ²/2I) J(J+1) = ħ√(k/μ) (v + 1/2) + B J(J+1)

Vibrational spectroscopists generally use units of wavenumbers, or cm⁻¹, though, which can be found by dividing the entire equation by hc.

  • v: vibration quantum number (0,1,2,…)
  • J: rotation quantum number (0,1,2,…)
  • k: spring constant/strength of bond
  • μ: reduced mass
  • I: moment of inertia, I = μr², where r is the bond length
  • B: rotational constant, B = ħ²/2I

I began looking into this to explain why multiple carbon monoxide absorption peaks appeared where they did in my near-IR spectra at 2.29 and 2.32 microns or 4367 and 4310 cm-1. I was able to calculate the correct 2.29 μm peak by subtracting the energy where v=0 and J=8 from the energy where v=2 and J=10. This means that these peaks are due to the first vibrational overtone of CO, i.e. Δv=2 instead of 1. The peak at 2.32 and any other peaks around the same area can be found by going from different rotational energy states, 6 to 8, 4 to 6, etc. ΔJ for CO is usually ±2 because it is a linear molecule.
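The arithmetic above can be checked numerically. The sketch below evaluates the harmonic oscillator + rigid rotor term values in wavenumbers using approximate literature constants for CO (ω̃ ≈ 2170 cm⁻¹, B ≈ 1.93 cm⁻¹); it ignores anharmonicity, which shifts the real overtone band, and the function names are mine:

```python
def term_value(v, J, w=2170.0, B=1.93):
    """Rovibrational term value in cm^-1 (harmonic oscillator + rigid rotor).

    w and B are approximate literature constants for CO in cm^-1;
    v and J are the vibrational and rotational quantum numbers.
    """
    return w * (v + 0.5) + B * J * (J + 1)

def transition_wavenumber(v1, J1, v2, J2):
    """Energy difference between two rovibrational levels, in cm^-1."""
    return term_value(v2, J2) - term_value(v1, J1)

# First overtone of CO (delta v = 2), e.g. from (v=0, J=8) to (v=2, J=10):
dnu = transition_wavenumber(0, 8, 2, 10)
wavelength_um = 1e4 / dnu  # convert cm^-1 to microns
```

This crude harmonic estimate lands near 2.27 μm, in the neighborhood of the observed 2.29 μm peak.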

  1. McQuarrie, Donald A. Quantum Chemistry. 2nd ed. Sausalito: University Science Books, 2008.
  2. Atkins, P.; de Paula, J. Physical Chemistry. 8th ed. New York: W. H. Freeman and Company, 2006.
  3. “Electronic Spectra of Organic Molecules”. Organic Chemistry. Accessed July 24, 2014.

More detail on solving the Rigid Rotor Schrödinger Equation can be found in the references above.

Special thanks to Professor Andrew Crowther.


Adding Spectra to the BDNYC Data Archive

Once your BDNYC development environment is all set up, adding spectra to the database is relatively simple and all done through Terminal.

Open Terminal and then launch iPython from the command line with ipython --pylab.

Next, import the BDdb module and load the actual database with:

import BDdb
db = BDdb.get_db('/Users/your_computer/Dropbox/BDNYCdb/BDNYC.db')

Now you're all set to start adding spectra. There are two types of files that we typically use for spectra: ascii files and FITS files.

Adding ASCII Files

Ascii files are just text files with two or three columns of data, which should always be the wavelength, flux and (if three columns) the uncertainty values at each point. Adding this data is super simple:

db.add_ascii('/path/to/the/ascii/file', source_id)

So how do you find the source_id for your object? Just use the identify() function. For example, say my object is '2MASS J00361617+1821104'. I would search by the first four digits of the object name, in this case:

db.identify('0036')
This may return a number of different sources, all with '0036' appearing somewhere in their name, so I would then visually inspect to find the source I'm looking for. In this case, the source_id is 86.

The file path and the source_id are the only required information to provide to add this type of data, though the more columns you can complete the better! The values of the optional columns can just be entered in the function as arguments, e.g. telescope_id=6. The other columns include:

  • wavelength_units: units of the wavelength axis, e.g. 'um' for microns, 'A' for angstroms
  • flux_units: units of the flux axis, e.g. 'erg/s/cm2/A', 'W/m2/A', 'Jy', etc.
  • wavelength_order: for high resolution data only, e.g. 58, 59, 60, etc.
  • publication_id: the id from the PUBLICATIONS table that corresponds to the paper where the data was first published
  • obs_date: the date of the observations as YYYY-MM-DD
  • instrument_id: the id from the INSTRUMENTS table that corresponds to the instrument used to record the spectrum
  • telescope_id: the id from the TELESCOPES table that corresponds to the telescope used to record the spectrum
  • mode_id: the id from the MODES table that corresponds to the instrument mode used to record the spectrum
  • airmass: the airmass at the time the spectrum was recorded, typically a value between 1 and 2
  • comment: comments about the spectrum that a future user of the data might need to know!

So as an example, if I want to add a NIR spectrum to the database taken with the SpeX Prism instrument on the NASA IRTF, my code should look like:

db.add_ascii('/Users/Joe/Desktop/NIR_spectra/2m0036_nir.txt', 86, telescope_id=7, instrument_id=6, mode_id=1, airmass=1.0263, obs_date='2013-01-17')

Note: the order of the optional arguments does not matter!

How do you know the telescope_id, instrument_id, mode_id, or publication_id? You can browse these tables in Terminal by running the table name as a method on the database instance or by searching the table with some keyword.

For example db.instruments() will show you every instrument_id and name. But db.instruments('SpeX') will return only those records that have 'spex' in the name.

The naming convention for publications is the first four letters of the first author's last name and then the two digit year. So I could try db.publications('Kirk01') to find Kirkpatrick et al. 2001 or just db.publications('Kirk') to see all Kirkpatrick first author publications.
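That naming convention is easy to generate. Here is a tiny illustrative helper (pub_shortname is my name for it, not a BDdb function):

```python
def pub_shortname(last_name, year):
    """First four letters of the first author's last name + two-digit year."""
    return last_name[:4].capitalize() + str(year)[-2:]
```

So pub_shortname('Kirkpatrick', 2001) gives 'Kirk01', the string to pass to db.publications().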

Otherwise, just launch the SQLite Database Browser and load the appropriate table.

All the other values should be in the ascii filename (e.g. 2m0036_2013may02_58.txt might be the object name, obs_date and wavelength_order) or in the ascii file itself above the data columns. Sometimes you might just know the correct information and can add it where possible. Otherwise you can always just leave these blank.

Add FITS Files

Adding spectra in FITS files works exactly the same way but with a little more automation.

Just like adding an ascii file above, you simply do:

db.add_fits('/path/to/the/fits/file.fits', source_id)

The automation comes in since many FITS files have a metadata "header" with common keys like 'OBS_DATE', 'TELESCOP', 'INSTRUME', etc., which BDdb automatically tries to pull out and insert into our new spectrum record.

However, you can still fill in blanks and/or override the automation by simply adding in the optional arguments as we did above with db.add_ascii().

OOPS! Don't panic!

If anything gets entered incorrectly, remember that you can always just open up the SQLite Database Browser, navigate to the record you just added in the SPECTRA table, and double-click the cells you need to edit.

Or you can just select the row of your erroneous record and click the Delete Record button, and then try adding the spectrum again.

Use of the DISTINCT function in an SQLite query

When using an SQLite query to pull from our database, you may run into the issue where you only want to see the desired field entry once, but the resulting list or dictionary gives it to you multiple times.

An Example

Say you want to get the source_id from the SPECTRA table where the spectral_type is an L dwarf and the spectra regime is NIR. Meaning, you want to know which L dwarfs we have NIR spectra for. Well, the output will give you a list of all of the entries we have spectra for, even if some entries share a source_id. Each source_id will show up in the list once for every separate NIR spectrum we have for that object. If you only want to see each source_id once, we can use the DISTINCT keyword. It works like this:

SELECT DISTINCT sp.source_id FROM spectra AS sp JOIN spectral_types AS spt ON spt.source_id=sp.source_id WHERE spt.spectral_type BETWEEN 10 AND 19 AND sp.regime='NIR'

(See the joining tables post if confused on how to do so.)

*NOTE: Here I used another keyword, BETWEEN, instead of writing out spt.spectral_type>=10 AND spt.spectral_type<20.
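A self-contained way to see the effect is with a toy in-memory database. The schema below is a simplified stand-in for the real SPECTRA and SPECTRAL_TYPES tables (the real ones have many more columns), with made-up rows:

```python
import sqlite3

# Toy in-memory database: source 1 has TWO NIR spectra, source 2 has one,
# and source 3 has only an optical spectrum.
db = sqlite3.connect(':memory:')
db.executescript("""
    CREATE TABLE spectra (source_id INTEGER, regime TEXT);
    CREATE TABLE spectral_types (source_id INTEGER, spectral_type REAL);
    INSERT INTO spectra VALUES (1, 'NIR'), (1, 'NIR'), (2, 'NIR'), (3, 'OPT');
    INSERT INTO spectral_types VALUES (1, 12), (2, 15), (3, 11);
""")

# The query from above, with a slot for the DISTINCT keyword:
query = """SELECT {} sp.source_id FROM spectra AS sp
           JOIN spectral_types AS spt ON spt.source_id=sp.source_id
           WHERE spt.spectral_type BETWEEN 10 AND 19 AND sp.regime='NIR'"""

with_dupes = db.execute(query.format('')).fetchall()      # source 1 repeats
unique = db.execute(query.format('DISTINCT')).fetchall()  # each source once
```

Without DISTINCT, source 1 shows up once per NIR spectrum; with it, each source_id appears exactly once.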