Followers of this blog and our team's scientific endeavors may know we maintain a curated database of brown dwarfs. An initial version of this database was published in Filippazzo et al. (2015) and contains information for 198 objects. The database is also maintained on Github, where we welcome contributions from other researchers. We've developed a set of tools for astronomers to work with SQL databases, namely the Python package astrodbkit. This package can be applied to other SQL databases as well, allowing astronomers from all fields of research to manage their data.
Here we introduce a new tool: AstrodbWeb, a web-based interface to explore the BDNYC database.
“Maybe previous, but never former - once #BDNYC, always #BDNYC!” - Emily Rice
Written By: Stephanie Douglas
The BDNYC group has been around for a while, and some of the older members are now in positions to provide new opportunities for the current undergrad crowd. Alejandro Núñez and I both joined BDNYC as undergrads: he as a Hunter College student, I as an NSF-funded REU student at the American Museum of Natural History. Then we both chose to do our graduate work at Columbia University - and with the same advisor, Marcel Agüeros. Our group studies rotation and activity in open cluster stars, and we typically receive 10-14 days of time per year on the 2.4m Hiltner telescope at MDM Observatory to take spectra of stars and study their H-alpha emission. This winter, Alejandro and I offered to take along any BDNYC undergrads who were interested in some hands-on observing experience.
The astrodbkit package can be used to modify an existing SQL database (such as The BDNYC Database) but it can also be used to create and populate a SQL database from scratch.
To do this, import the astrodb module and create a new database with
from astrodbkit import astrodb
dbpath = '/path/to/new_database.db'
astrodb.create_database(dbpath)
Then load your new database with
db = astrodb.Database(dbpath)
and start adding tables! The db.table() method accepts as its arguments the table name, a list of field names, and a list of data types, like so:
db.table('my_new_table', ['field1','field2'], ['INTEGER','TEXT'], new_table=True)
new_table=True is necessary to create a new table. Otherwise, it looks for an existing table to modify (which you could do as well!).
To populate your new database with data, read the documentation here or a summary at Adding Data to the BDNYC Database.
As always, I recommend the SQLite Browser for a nice GUI to make changes outside of the command line.
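If you're curious what those astrodb calls do under the hood, they boil down to ordinary SQLite statements (an assumption based on astrodbkit wrapping Python's sqlite3; the table name and values below are just the examples from this post):

```python
import sqlite3

# A sketch of the SQL behind a call like
#   db.table('my_new_table', ['field1','field2'], ['INTEGER','TEXT'], new_table=True)
conn = sqlite3.connect(':memory:')  # use a file path for a real database
conn.execute("CREATE TABLE my_new_table (id INTEGER PRIMARY KEY, field1 INTEGER, field2 TEXT)")
conn.execute("INSERT INTO my_new_table (field1, field2) VALUES (?, ?)", (42, 'hello'))
rows = conn.execute("SELECT field1, field2 FROM my_new_table").fetchall()
print(rows)  # [(42, 'hello')]
conn.close()
```

This is also exactly what the SQLite Browser shows you graphically, which makes it handy for sanity-checking your new tables.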
The Conference for Undergraduate Women in Physics, or CUWiP, is a set of simultaneous conferences taking place across the United States and supported by the American Physical Society. A variety of activities take place at the conferences, including plenary talks, panel discussions, student posters and talks, workshops, and graduate school and career fairs.
This year, BDNYC members Victoria DiTomasso, Haley Fica, and Ellie Schwab attended CUWiP. Victoria and Ellie attended the conference held at Wesleyan University, while Haley attended the one held at Georgia Institute of Technology. All three presented posters on the research they carry out with BDNYC. You can find copies of the posters below.
As you make changes to the astrodbkit repository on Github, you may find that the documentation needs updating. Luckily, we use the invaluable Sphinx and the awesome ReadTheDocs to generate the documentation so this is fairly simple.
First, make sure you update the appropriate docstrings (those informative green bits just below the function definition), as this is what Sphinx will use to generate the documentation!
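For reference, here's the kind of docstring Sphinx can turn into nicely formatted documentation; the function and its contents are entirely hypothetical, just to show the reST field-list format:

```python
def search_sources(ra, dec, radius=0.1):
    """Hypothetical example of a Sphinx-friendly docstring using reST fields.

    :param ra: Right ascension of the search center, in decimal degrees.
    :param dec: Declination of the search center, in decimal degrees.
    :param radius: Search radius in degrees (default 0.1).
    :returns: A list of (ra, dec) tuples falling inside the search box.
    """
    # Toy implementation just so the example runs; the docstring is the point.
    catalog = [(10.0, -5.0), (10.05, -5.02), (200.0, 45.0)]
    return [(r, d) for r, d in catalog
            if abs(r - ra) <= radius and abs(d - dec) <= radius]
```

Sphinx renders the `:param:` and `:returns:` fields as a tidy parameter table on the generated page.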
Start in the top-level astrodbkit directory and generate the documentation from the module docstrings with
sphinx-build docs astrodbkit -a
Then cd into the docs directory and rebuild the HTML files, typically with
make html
(assuming the docs directory contains the standard Sphinx Makefile).
Now move back to the top level directory and add, commit, and push your changes to Github with something like
git add docs
git commit -m "Updated the documentation for methods x, y, and z."
git push origin <your_branch>
All set! Refresh the page (after a few minutes so ReadTheDocs can build the pages) and make sure everything is to your liking. Well done.
And just in case, here's a great tutorial for getting started with Sphinx and here's the official documentation.
It's time for the 227th meeting of the American Astronomical Society! A number of BDNYC members are there to present talks and posters, so be sure to check them out! In this post we list the dates and times of the various posters and talks. We'll be posting links to the posters as well once the conference is over.
In the coming year, we are aiming to refocus the content of this blog. While the blog does include general-purpose information about brown dwarf research, most of the content has been geared toward internal descriptions of our software, our database, and our setup that are relevant only to members of BDNYC. We're now using Trac to manage the internal workings of our team and will be changing some of the content you see on these pages.
From now on, you'll see posts describing general tips and tricks, including coding, project management techniques (such as using Trac), and observing tricks. You'll also see posts announcing team publications as well as team presence at conferences such as AAS. Finally, we hope to publish posts describing the small, incremental steps we take as we carry out our research, which include interesting results and plots.
We hope these changes will result in more frequent posts and will make the blog more valuable to the community as a whole.
BDNYC team at Las Campanas Observatory. From left to right: Sara Camnasio, Munazza Alam, Haley Fica, Jackie Faherty
This past September, four members of the BDNYC team travelled to Chile to observe at Las Campanas Observatory (LCO). The team was led by Jackie Faherty and included undergraduate students Munazza Alam, Sara Camnasio, and Haley Fica. Both Munazza and Sara were funded by National Geographic Young Explorers Grants.
Las Campanas Observatory is one of the major telescope facilities in Chile, home to the two 6.5-meter Magellan telescopes and the soon-to-be-constructed Giant Magellan Telescope. For this observing run, the team had a single night on the 2.5-meter du Pont telescope and two nights on the Baade Magellan telescope. The aim of the run was to observe cold brown dwarfs, obtaining parallaxes with CAPSCam on the du Pont and FourStar on Baade, as well as spectra with FIRE on Baade.
The weather did not cooperate throughout the run, with a mix of high humidity, high winds, and clouds, but that did not deter the team from pushing forward and getting as much data as they could. The all-ladies team has contributed to posts in the Las Campanas Belles blog, which details the adventures of women scientists at Las Campanas and other observatories. The links below lead to each student’s perspective and two summary posts.
Jackie Faherty summary posts:
It's a Ladies Extravaganza at Las Campanas
Thems The Breaks....
Munazza Alam: Maravilla
Sara Camnasio: Setting Foot in Chile (and trying to keep it still)
Haley Fica: A first time Observer
Registering the package
To register the package and deploy the initial release to PyPI, do:
python setup.py register -r pypi
python setup.py sdist upload -r pypi
Deploying a new release
So you've made some cool new improvements to your Python package and you want to deploy a new release. It's just a few easy steps.
- After your package modules have been updated, update the version number in your setup.py file. The format is major.minor.micro; which part you increment depends on what changed. For example, a small bug fix to v0.2.3 would increment it to v0.2.4, while a new feature might increment it to v0.3.0.
- Then make sure all the changes are committed and pushed to Github.
- Build with
python setup.py sdist
- Twine with
twine upload dist/compressed_package_filename
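The major.minor.micro convention in step 1 can be summed up with a tiny helper (hypothetical, not part of setuptools or astrodbkit; just to illustrate which parts reset when you increment):

```python
def bump_version(version, part='micro'):
    """Increment one part of a 'major.minor.micro' version string,
    resetting the parts to its right back to zero."""
    major, minor, micro = (int(x) for x in version.split('.'))
    if part == 'major':
        return f"{major + 1}.0.0"
    if part == 'minor':
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{micro + 1}"

print(bump_version('0.2.3'))           # small bug fix: 0.2.4
print(bump_version('0.2.3', 'minor'))  # new feature:   0.3.0
```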
To add data to any table, there are two easy steps. As our working example, we'll add some new objects to the SOURCES table.
Step 1: Create an ascii file of the data
First, you must choose a delimiter, which is just the character that will break up the data into columns. I recommend a pipe '|' character since they don't normally appear in text. This is better than a comma since some data fields may have comma-separated values.
Put the data to be added in an ascii file with the following formatting:
- The first line must be the |-separated column names to insert/update, e.g. ra|dec|publication_id. Note that the column names in the ascii file need not be in the same order as in the table. Also, only the column names that match will be added; non-matching or missing column names will be ignored. E.g. spectral_type|ra|publication_id|dec will ignore the spectral_type values, since this is not a column in the SOURCES table, and insert the other columns in the correct places.
- If a record (i.e. a line in your ascii file) has no value for a particular column, type nothing. E.g. for the column names ra|dec|publication_id|comments, a record with no publication_id should read 34.567|12.834||This object is my favorite!.
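Putting those rules together, here's a sketch of what such a file looks like and how the empty field behaves (the file name and values are just illustrative, built from the examples above):

```python
import os
import tempfile

# Hypothetical upload file following the rules above: a |-separated header,
# then one record per line, with empty fields left blank.
lines = [
    "ra|dec|publication_id|comments",
    "34.567|12.834|2|A source with a publication",
    "34.567|12.834||This object is my favorite!",  # no publication_id
]
path = os.path.join(tempfile.gettempdir(), 'upload_file.txt')
with open(path, 'w') as f:
    f.write('\n'.join(lines) + '\n')

# Reading it back shows how the blank field comes through
with open(path) as f:
    header = f.readline().rstrip('\n').split('|')
    records = [line.rstrip('\n').split('|') for line in f]
print(header)         # ['ra', 'dec', 'publication_id', 'comments']
print(records[1][2])  # '' (the missing publication_id)
```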
Step 2: Add the data to the specified table
To add the data to the table (in our example, the SOURCES table), import astrodbkit and initialize the .db file. Then run the add_data() method with the path to the ascii file as the first argument and the name of the target table as the second. Be sure to specify your delimiter with delim='|'. Here's what that looks like:
from astrodbkit import astrodb
db = astrodb.Database('/path/to/the/database/file.db')
db.add_data('/path/to/the/upload/file.csv', 'sources', delim='|')