It’s possible to define a tensor in Ruby using the built-in Array class, but this gets tedious for multi-dimensional tensors. Moreover, Array is designed to be heterogeneous: its elements can be of different types or classes. That might seem like an advantage, but it has a significant downside. Because an Array is heterogeneous, memory must be allocated so that an element of any size can be added or removed, which causes many re-allocations. Indexing and other array operations also get slower because of this heterogeneity.
What about scenarios where all tensor elements are of a single type, so a homogeneous array would suffice, and there are memory and speed constraints? NumRuby is the solution for such requirements.
A tensor can be defined using the NMatrix object of NumRuby.
shape gives the number of dimensions of the tensor and the size of each dimension. For example, a tensor of shape [2, 2, 2] has 3 dimensions, each of size 2, and hence 8 elements. A sample elements array for this could be [1, 2, 3, 4, 5, 6, 7, 8]. type is the data type of each tensor element; it can be any of :nm_bool, :nm_int, :nm_float32, :nm_float64, :nm_complex32 or :nm_complex64, depending on the requirements.
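Since the original example listing did not survive, here is a sketch of creating such a tensor; the `NMatrix.new(shape, elements, type)` signature is inferred from the description above and may differ from the current NumRuby API:

```ruby
require 'numruby'

# A [2, 2, 2] tensor: 3 dimensions of size 2, hence 8 elements in a flat array.
elements = [1, 2, 3, 4, 5, 6, 7, 8]
m = NMatrix.new [2, 2, 2], elements, :nm_int
```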
One can also perform elementwise operations using NumRuby. Elementwise operations are broadly of two types: uni-operand and bi-operand.
Uni-operand operators apply to just one tensor, for example taking the sine, cosine or tangent of each element of the tensor.
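In plain-Ruby terms, a uni-operand elementwise operation simply maps a function over the flat elements array; NumRuby performs the same mapping over the tensor's storage (the NMatrix method name in the comment is an assumption):

```ruby
# Plain-Ruby picture of an elementwise sine over a flat elements array.
elements = [0.0, Math::PI / 2, Math::PI]
sines = elements.map { |x| Math.sin(x) }
# sines is now approximately [0.0, 1.0, 0.0]

# With NumRuby this would be a single call on the tensor, e.g. (name assumed):
#   m = NMatrix.new [3], elements, :nm_float64
#   m.sin
```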
Bi-operand operators apply to two tensors, for example addition, subtraction or multiplication of corresponding elements of the two tensors.
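The bi-operand case pairs up corresponding elements of two same-shape tensors; in plain Ruby:

```ruby
# Plain-Ruby picture of elementwise addition of two same-shape tensors.
a = [1, 2, 3, 4]
b = [10, 20, 30, 40]
sum = a.zip(b).map { |x, y| x + y }
# => [11, 22, 33, 44]

# In NumRuby this is a single operator call on two NMatrix objects, e.g. m1 + m2.
```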
NumRuby also supports linear algebra for 2-dimensional tensors. One can easily perform operations such as matrix inverse, dot product and matrix decompositions.
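Ruby's standard-library Matrix class gives a feel for these operations; NumRuby exposes the equivalents on NMatrix, so the snippet below is only a conceptual stand-in:

```ruby
require 'matrix'

a = Matrix[[4.0, 7.0], [2.0, 6.0]]
det  = a.determinant   # 4*6 - 7*2 = 10.0
inv  = a.inverse       # matrix inverse
prod = a * inv         # dot product; multiplying by the inverse gives the identity
```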
With GSoC 2019 coming to an end, this is my final blog post, covering all my work on the project Rubyplot.
Rubyplot is a plotting library in Ruby for scientific development, inspired by the Matplotlib library for Python. Users can create various types of plots, such as scatter plots and bar plots, and can also create subplots which combine several of these plots. The long-term goal is to build an efficient, scalable and user-friendly library with a backend-agnostic frontend supporting various backends, so that the library can be used on any device.
Creating graphs in Rubyplot is very simple and can be done in just a few lines of code.
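The original listings and output images for this section were lost; as a rough reconstruction, a minimal scatter plot in Rubyplot's style looks something like the following (method names such as `add_subplot!` and `scatter!` are from memory of the tutorial and should be checked against it):

```ruby
require 'rubyplot'

figure = Rubyplot::Figure.new
axes = figure.add_subplot! 0, 0
axes.scatter! [1, 2, 3, 4], [10, 20, 15, 30]
figure.write('scatter.png')   # or figure.show to view without saving
```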
Rubyplot started as two GSoC 2018 projects by Pranav Garg (@pgtgrly) and Arafat Dad Khan (@Arafatk), with mentors from the Ruby Science Foundation (SciRuby): Sameer Deshmukh (@v0dro), John Woods (@mohawkjohn) and Pjotr Prins (@pjotrp). Pranav Garg worked on GRRuby, which used the GR backend, and Arafat Dad Khan worked on Ruby Matplotlib, which used the ImageMagick backend, with the ultimate goal of combining both into Rubyplot. After GSoC 2018, Sameer Deshmukh merged the two projects to create Rubyplot and has maintained it ever since. Around May 2019, I started working on Rubyplot as a part of GSoC 2019.
As a part of GSoC 2019, my project had 3 major deliverables:
1. ImageMagick support (Phase 1): Support for the ImageMagick back-end will be added in addition to the currently supported GR back-end; the front-end of the library will be back-end agnostic, and the overall integrity of the library will be preserved.
2. Plotting and show function (Phase 2): A new plot function will be added which plots markers (for example, circles) to form a scatter plot from the input points (like the plot function in Matplotlib). A new show function will be added to allow viewing a plot without saving it. The plot function will be back-end agnostic and hence will support both the GR and Magick back-ends.
3. Integration with IRuby notebooks (Phase 3): Rubyplot will be integrated with IRuby notebooks, supporting all backends and allowing inline plotting.
As a part of GSoC 2019, I completed all the deliverables I had initially planned along with a tutorial for the library and some other general improvements.
Details of my work are as follows:
During Phase 1, I focused on setting up the ImageMagick backend. This involved the basic functionality required by any backend of the library: the X-axis and Y-axis transform functions, the within_window function which is responsible for placing plots in the correct position, functions for drawing the X and Y axes, and functions for drawing text and scaling the figure according to the dimensions given by the user. I implemented these using internal RMagick functions such as scale, translate and rotate, which were very useful.
After this, I worked on the scatter plot, the first plot I ever worked on. It posed a particular and interesting problem: the different marker types were implemented internally in the GR backend, but for the ImageMagick backend I had to implement everything using basic shapes such as circles, lines, polygons and rectangles. To solve this, I created a hash of lambdas containing the code to create the different marker types from these basic shapes.
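The idea can be sketched in a few lines: each marker type maps to a lambda that emits the basic-shape drawing calls it is built from. The primitive names below are illustrative stand-ins, not Rubyplot's actual internals:

```ruby
# Illustrative sketch: marker type => lambda composing the marker from basic shapes.
MARKERS = {
  circle: ->(x, y, r) { [[:circle, x, y, r]] },
  plus:   ->(x, y, r) { [[:line, x - r, y, x + r, y], [:line, x, y - r, x, y + r]] },
  square: ->(x, y, r) { [[:rectangle, x - r, y - r, x + r, y + r]] }
}

shapes = MARKERS[:plus].call(5, 5, 2)
# => [[:line, 3, 5, 7, 5], [:line, 5, 3, 5, 7]]
```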
After this, I implemented all the simple plots that Rubyplot supports: line plot, area plot, bar plot, histogram, box plot, bubble plot, candle-stick plot and error-bar plot.
So, during Phase 1, I completed the following deliverables -
1. Set up the ImageMagick backend to have the basic functionality.
2. Implemented and tested the simple plots in Rubyplot which are scatter plot, line plot, area plot, bar plot, histogram, box plot, bubble plot, candle-stick plot and error-bar plot.
Code for Phase 1 can be found here.
I started Phase 2 by implementing the multi plots which are multi stack-bar plot, multi-bar plot, multi-box plot and multi candle-stick plot.
Next, I implemented the plot function, which is a combination of scatter plot and line plot; using it, the user can easily create a scatter plot, a line plot, or a combination of both. Its most interesting feature is the fmt argument, which sets the marker type, line type and colour of the plot using just characters. Instead of writing out the type names and setting variables, the user can simply pass a string in the fmt argument whose characters encode the marker type, line type and colour.
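A toy version of such an fmt parser shows the mechanism; the character tables here are hypothetical and smaller than Rubyplot's actual ones:

```ruby
# Hypothetical fmt parser: one character each for colour, marker and line type.
FMT_COLOURS = { 'r' => :red, 'g' => :green, 'b' => :blue }
FMT_MARKERS = { 'o' => :circle, '+' => :plus, 's' => :square }
FMT_LINES   = { '-' => :solid, ':' => :dotted }

def parse_fmt(fmt)
  chars = fmt.chars
  {
    colour: FMT_COLOURS[chars.find { |c| FMT_COLOURS.key?(c) }],
    marker: FMT_MARKERS[chars.find { |c| FMT_MARKERS.key?(c) }],
    line:   FMT_LINES[chars.find { |c| FMT_LINES.key?(c) }]
  }
end

parse_fmt('ro-')   # => { colour: :red, marker: :circle, line: :solid }
parse_fmt('g:')    # => { colour: :green, marker: nil, line: :dotted }
```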
Next was the show function, an alternative to the write function. It draws the figure and shows it in a temporary pop-up window without saving it to disk, which lets the user test code quickly and easily. This was done using internal functions of the backends: display for ImageMagick and gr_updatews for GR.
So, during Phase 2, I completed the following deliverables -
1. Implemented and tested the multi plots in Rubyplot which are multi stack-bar plot, multi-bar plot, multi-box plot and multi candle-stick plot.
2. Implemented and tested the plot function with the fmt argument.
3. Implemented and tested the show function.
Code for Phase 2 can be found here and here.
During Phase 3, I integrated Rubyplot with IRuby notebooks, which lets the user draw figures inside a notebook just by using the show function. Through this integration, the user can quickly and easily test code step by step before running the whole codebase.
I also implemented ticks for ImageMagick backend.
Finally, I created a tutorial for the library, which also contains template code for all the plots, so that a user can easily get familiar with how the library works and start using it.
So, during Phase 3, I completed the following deliverables -
1. Integrated Rubyplot with IRuby notebooks with the support for inline plotting.
2. Implemented and tested ticks for Magick backend.
3. Created the tutorial for Rubyplot.
Code for Phase 3 can be found here.
I plan to keep contributing to Rubyplot and also start contributing to other projects of SciRuby.
Future work on Rubyplot includes writing documentation, adding more tests, more plot types and more backends, making the plots interactive, and eventually adding support for plotting interactive 3-dimensional graphs.
With this, we come to the end of GSoC 2019. These 3 months have been very challenging, interesting, exciting and fun. I got to learn a lot while working on Rubyplot and interacting with my mentors. My software development and general programming skills have improved, which will help me a lot in the future. I would love to keep working with SciRuby on more such interesting projects and maybe even try for GSoC again next year ;)
I would like to express my gratitude to my mentor Sameer Deshmukh for his guidance and support. He was always available and had solutions to every problem I faced. I learned a lot from him and hope to learn much more in the future. I could not have asked for a better mentor.
I would also like to thank Pranav Garg who introduced me to Ruby and also to the SciRuby community. During his GSoC 2018 project, he introduced me to the Rubyplot library and helped me get started with it. His suggestions were very helpful during my GSoC 2019 project.
I would also like to thank mentors from SciRuby Prasun Anand and Shekhar Prasad Rajak for mentoring me and organising the occasional meetings and code reviews. I would also like to thank Udit Gulati for his helpful insights during the code reviews.
I am grateful to Google and the Ruby Science Foundation for this golden opportunity.
GSoC 2019 proposal - https://docs.google.com/document/d/1MR01QZeX_8h7a16nmkOYlyrVt–osB1Yg9Vo0xXYtSw/edit?usp=sharing
I wanna thank Ruby Science Foundation, all mentors and org admins for providing me this wonderful opportunity to enhance my knowledge and work on a project. I would also like to thank Google for organizing such a wonderful program due to which I got introduced to Open-source and Ruby Science Foundation. I especially want to thank my mentor Prasun Anand for guiding me through this period, keeping me motivated and for tolerating my procrastination.
For my summer of code project I decided to create a plotting library.
From scratch.
In Ruby.
The GSoC 2018 application can be found here.
The code for the project can be found here.
RubyPlot is currently being developed here.
The plotting architecture for the library was inspired by the late Dr. John Hunter's Python plotting library, Matplotlib.
The Matplotlib architecture is broadly divided into three layers (as shown in the masterpiece of a figure I made below): the backend, the artist and the scripting layer.
The backend layer can further be divided into three parts: the figure canvas, the renderer and the event handling.
The Matplotlib architecture is mostly written in Python, with some of its backend (the AGG renderer and figure canvas) written in C++ (the original AGG backend and helper scripts, which are quite tightly bound to Python). More recently, backends have also been written in Python, using renderers that have Python APIs. The finer details of the architecture can be found here.
In the interest of time, I decided on an iterative process to develop the library, reusing an existing artist layer. After a lot of discussion we decided to use the GR framework for this. The only issue was that GR did not have a Ruby API.
To create the C extensions I initially tried Fiddle, followed by FFI, but this led to issues when handling arrays. Hence I decided to go with the old-fashioned Ruby C API to create the extensions. The code for this can be found here.
The scripting layer is meant for high-level plotting. The scripting library created on top of the GR framework wrapper can plot the following:
Scatter plots
Line graphs
Bar plots
Stacked bar plots
Stacked bar plots (stacked along the z axis)
Candlestick plots
All the above plots have many customisation options, which are covered in the documentation.
Each figure can have multiple subplots, and each subplot can have multiple plots.
Here is how the library works.
Figure is the class that a user instantiates; this is where all the plotting takes place. An instance holds the state of the figure. The GR framework is used as the artist layer, which does all the drawing on the figure; GR is also the backend.
The GR artist-layer functions are implemented in C. We wrap each function in a Ruby class with a call method that executes the GR function when an object of the class is called. Each of these Ruby classes is called a task, reflecting that it performs one task; for example, ClearWorkspace performs the task of clearing the workspace.
Now, the figure is divided into subplots; it is Subplot(1,1,1) by default. The figure holds subplot objects, and each subplot is of a type such as bar plot or line plot. These plots are defined in the Plots module, a submodule of the Scripting module. The Plots module has a submodule named BasePlots which defines the two plot bases, LazyBase and RobustBase. LazyBase is for plots that depend on the state of the figure; for example, a bar graph depends on the location of the axes. Every lazy plot has a unique call function rather than inheriting it from LazyBase; in lazy plots the instances of the GR function classes are called as soon as they are instantiated, and all of this happens inside the call function. RobustBase is for plots that are independent of the state of the figure; for example, a scatter plot is independent of the location of the axes. Plots that subclass RobustBase append the instances of GR function classes to their task list when initialized, and these instances are called via the call method defined in RobustBase.
So each subplot, whether a bar plot, scatter plot, etc., inherits one of these bases. Each subplot is just a collection of tasks, so it has a task list storing the Task objects to be performed. For example, a scatter plot has the tasks SetMarkerColorIndex (sets the colour of the marker), SetMarkerSize (sets its size), SetMarkerType (sets its type) and Polymarker (draws the marker with the chosen colour, size and style). Whenever a new Subplot object is initialized, for example subplot(r,c,i), the figure is divided into a matrix with r rows and c columns, the subplot with index i is set as the active subplot, and this active subplot is pushed into the subplot list. Each subplot object has a unique identity (r,c,i), so if the user wants to access a subplot that has already been declared, this identity is used. When the subplot object is called (i.e. to view or save), it first executes some necessary tasks and then pushes the tasks related to the bar plot, scatter plot, etc. onto the task list.
Figure is a collection of such subplots and so Figure has a subplot list which stores the subplot objects.
These tasks are just stored in the lists and are not performed (i.e. called) until the user asks to view or save the figure. When the user calls view or save (which are tasks themselves), the tasks are performed and the figure is plotted. This is done using the Plotspace module: when the figure calls the view or save task, that task creates a Plotspace object, the state of the figure is copied into it, and the object starts executing (calling) the tasks from the task list of each subplot in the subplot list, so the figure is plotted and viewed or saved.
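A stripped-down model of this mechanism (class and task names simplified from the real Rubyplot code) shows why nothing is drawn until view or save is called:

```ruby
# Simplified model: a task wraps one backend call; subplots queue tasks, and
# they only run when the figure is asked to render (view/save).
class Task
  def initialize(&action)
    @action = action
  end

  def call
    @action.call
  end
end

class Subplot
  attr_reader :tasks

  def initialize
    @tasks = []
  end
end

log = []
subplot = Subplot.new
subplot.tasks << Task.new { log << :set_marker_size }
subplot.tasks << Task.new { log << :polymarker }

# Nothing has executed yet; the tasks are only stored.
subplot.tasks.each(&:call)   # "view"/"save" triggers execution
# log == [:set_marker_size, :polymarker]
```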
Here is the current view of Library:
The library is currently being developed by the SciRuby community here. Currently it is a static library; after further development, its architecture should look like the following:
I would like to thank Sameer Deshmukh and Prasun Anand for guiding me through every step of software design and helping me through every decision. I would also like to thank Dr. John Woods and Dr. Pjotr Prins for their valuable feedback. I am glad to be a part of the SciRuby community and I hope to contribute further towards its goal.
I would also like to thank Arafat Khan, a fellow GSoCer who worked on the library using RMagick as the backend renderer, for our fruitful debates over the architecture of the library.
Finally, I would like to thank Google for giving me this opportunity.
At SciRuby, we came across the ambitious task of designing the next Matplotlib for Ruby, so we examined quite a few different plotting libraries and figured that if we could put together ideas from all of them, we could make something really amazing. Our primary source of inspiration was Matplotlib. The Matplotlib architecture is broadly divided into three layers: the backend, the artist and the scripting layer.
The Matplotlib architecture is mostly written in Python, with some of its backend (the AGG renderer and figure canvas) written in C++ (the original AGG backend and helper scripts, which are quite tightly bound to Python). More recently, backends have also been written in Python, using renderers that have Python APIs. The finer details of the architecture can be found here.
Based on Matplotlib our initial plans for the library can be described in this visual.
We decided to build two different Ruby libraries independently but with many parallels in their code. Eventually, when the project is complete, we will combine them into a single repository and give users the option to use either library as a backend for constructing plots.
The first, GR Plot, is a plotting library for Ruby that uses the GR framework as a backend. The second, Magick Plot, is a plotting library that produces quality figures in a variety of hardcopy formats using RMagick as a backend.
Magickplot is an interesting library with many features similar to GRPlot, but the internal implementations of the two libraries are radically different. We believe that, depending on the use case, users may find either of them more useful than the other. So our next goal is to merge them and give users a simple API to switch backends easily from GR Plot to Magick Plot.
My work in particular dealt with building Magickplot. The library works much like painting: you are given an empty sheet called the figure canvas, and you draw on it using the plotting features. For all drawing and plotting purposes this library uses RMagick.
So where are our paint brushes and paints for drawing on the plot?
These base features will let you make bar, scatter, dot, line and bubble plots with Magickplot with very accurate geometry. A better walk-through of the construction of a single plot with this library can be found in this blog.
My GSoC 2018 application for the project can be found here.
The entire work for Rubyplot can be summarized in this series of blog posts:
Our ultimate goal is to make this project a Matplotlib equivalent for Ruby with tons of really amazing customization features. I hope you find this interesting and useful. The library is currently being developed by the SciRuby community; feel free to try it out from GitHub. Any suggestions and recommendations are welcome.
I have been an active contributor to a few open source projects and have started a few nice ones of my own, and I feel really glad to have been introduced to the open source community. I really appreciate the effort by the Google Open Source Committee in conducting GSoC every year. It is the best platform for aspiring programmers to improve their skills and give back to society by developing free and open source software.
Thanks to all my mentors from SciRuby, namely Sameer Deshmukh, Pjotr Prins, Prasun Anand and John Woods. Special thanks to Pranav Garg, a fellow GSoC student with SciRuby and the lead developer of GR-Ruby.
Daru-view now presents data in some more visualizations, such as HighMap and HighStock, along with the already implemented HighCharts, GoogleCharts, DataTables and Nyaplot. It provides new features like formatting a Daru::View::Table (GoogleCharts table) with different colors, patterns, etc., exporting charts to different formats, comparing different visualizations in a row, and many more. Follow these IRuby examples to see the features currently available in daru-view.
These figures describe the usage of some of the features implemented in daru-view during GSoC.
The GSoC 2018 application can be found here.
The work done during this GSoC has been explained in the following eight blog posts:
The future work involves removing daru-view's dependency on the gems google_visualr and lazy_high_charts by creating our own gems. Check out these new ideas that can be implemented in daru-view.
This has been my first attempt to explore the open source community. The summer was filled with the development of open source software and definitely was a great learning experience.
I really appreciate the effort by the Google Open Source Committee in conducting GSoC every year. It is the best platform for aspiring programmers to improve their skills and give back to society by developing free and open source software.
I would like to express my sincere gratitude to Ruby Science Foundation, all the mentors and org admins for providing me this wonderful opportunity to enhance my knowledge and work independently on a project. I especially want to thank Shekhar for guiding me through the journey, helping and motivating me in every possible way.
I am very thankful to Google for organizing such an awesome program.
ArrayFire-rb now supports linear algebra on GPU and CPU. Currently only the double dtype has been implemented. It supports dense and sparse matrices and has multiple backends, namely CUDA, OpenCL and CPU.
(Note: The above benchmarks have been done on an AMD FX 8350 octacore processor and Nvidia GTX 750Ti GPU. CUDA backend of ArrayFire was used with double floating points.)
The figure shows that ArrayFire takes the least computation time of all. For elementwise arithmetic operations, ArrayFire is 2e4 times faster than NMatrix for Ruby and 2e3 times faster than NMatrix for JRuby.
The figure shows that ArrayFire takes the least computation time of all. ArrayFire is 3e6 times faster than NMatrix for JRuby and NMatrix for Ruby (not BLAS), and 7e5 times faster than NMatrix for Ruby (using BLAS).
For LAPACK routines, like calculating the determinant and lower-upper factorization, ArrayFire is 100 times faster than NMatrix for JRuby and 6 times faster than NMatrix for Ruby (using LAPACKE).
The GSoC 2017 application can be found here.
ArrayFire-rb: The pull request is undergoing a review.
ArrayFire-rb Benchmarks: Codebase can be found here.
Bio::FasterLmmD: Codebase can be found here.
The work on creating the bindings has been explained in the following nine blog posts:
I took a side-track working on Bio::FasterLmmD. This work is not complete and still in progress.
It is an effort to call D from Ruby. The work has been explained in a previous blog post.
The work on ArrayFire-rb - JRuby has been postponed for now as I wanted to concentrate on MRI for the best results.
The future work involves improving the ArrayFire-rb code and writing tutorials. ArrayFire is not limited to linear algebra, so I will create bindings for signal processing, computer vision, etc. I will also add support for data types other than double.
The work on ArrayFire-rb - JRuby will begin as soon as ArrayFire gem is published.
This has been my second GSoC with SciRuby. It has been more than a year of contributing extensively to FOSS.
I really appreciate the effort by the Google Open Source Committee in conducting GSoC every year. It is the best platform for aspiring programmers to improve their skills and give back to society by developing free and open source software.
Last year’s GSoC work helped me to present a talk at FOSDEM 2017 and Ruby Conf India 2017. I got active in the Indian Ruby Community. Recently, I have been invited as a speaker to Ruby World Conference 2017, Matsue, Japan and RubyConf 2017, New Orleans, to talk on “GPU computing with Ruby”.
I plan to continue contributing to open source, strive for improving my skills, and help new programmers contribute to FOSS. I would be glad if I could mentor students for upcoming GSoCs.
I would like to express my sincere gratitude to my mentor Pjotr Prins for his guidance, patience and support. I have learned a lot from him since my last GSoC and am still learning. I couldn't have hoped for a better mentor.
I am grateful to Google and the Ruby Science Foundation for this golden opportunity.
I am very thankful to John Woods, Sameer Deshmukh, Alexej Gossmann, Gaurav Tamba and Pradeep Garigipati who mentored me through the project.
daru-view is a plugin gem for daru.
daru-view is designed for interactive plotting of charts and tables. It provides different plotting tools like Nyaplot, HighCharts, GoogleCharts and DataTables, so you don't have to write any JavaScript code yourself or switch to another language to get charts.
It can work with any Ruby web application framework like Rails, Sinatra, Nanoc or Hanami. If you want to try a few examples, please look at the daru-view/spec/dummy_* examples of Rails, Sinatra and Nanoc web applications.
Ruby developers now use the IRuby notebook for interactive programming, and daru-view supports IRuby notebooks as well. So if you just want to see a chart for some DataFrame or array of data, you can use daru-view.
daru-view can generate chart images to download and save.
The daru-view adapters googlecharts and highcharts are able to generate 3D charts as well.
Table has some key features like pagination and search, with many more to be added. It is designed to load large data sets smoothly.
Daru does a pretty good job of data analysis and manipulation in the IRuby notebook as well as in the backend of web applications. Ruby web application frameworks like Ruby on Rails, Sinatra and Nanoc are popular. So if Ruby developers have a gem like daru that can do data analysis and visualization work in applications, there is no need to shift to another language or use another gem.
My project for GSoC 2017 was to “make Daru more ready for integration with modern Web framework” in terms of visualization.
To improve data viewing, daru-view, a plugin gem for daru, was created. daru-view is for easy and interactive plotting in web applications and the IRuby notebook. It works in frameworks like Rails, Sinatra and Nanoc, and hopefully in others too.
To see a quick overview of daru-view’s features, have a look at these examples:
This is how we can create a Plot class object:
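The lost snippet presumably showed the two-argument constructor described just below; a sketch:

```ruby
require 'daru/view'

# data: a Daru::DataFrame, an array, or whatever the chosen adapter accepts.
# options: a hash of chart options understood by that adapter.
plot = Daru::View::Plot.new(data, options)
```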
data can be a Daru::DataFrame, a data array, or any format the adapter supports. options is a hash containing various options to customize the chart. If you have chosen a plotting library, you must use the options that library provides. Here are the libraries daru-view uses.
Please check these example options; they are passed just as in the JavaScript code:
GoogleCharts: https://developers.google.com/chart/interactive/docs/gallery
HighCharts: https://www.highcharts.com/demo
Nyaplot: https://github.com/SciRuby/nyaplot (it works the same as daru)
Note: the user must have some knowledge of the plotting tool they want to use in daru-view, so that they can pass the correct options.
Set the plotting library to :googlecharts to use this adapter. This will load the required JS files in your webpage or IRuby notebook.
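As described, selecting the adapter is a single assignment (this setter also appears later in this post):

```ruby
require 'daru/view'

Daru::View.plotting_library = :googlecharts
```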
Let’s create a DataFrame:
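The original DataFrame listing was lost; a stand-in built with daru's standard constructor:

```ruby
df = Daru::DataFrame.new(
  {
    year:  ['2013', '2014', '2015', '2016'],
    sales: [1000, 1170, 660, 1030]
  }
)
```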
Now, time to plot it:
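Something along these lines (the option keys :type, :x and :y are typical of the GoogleCharts-adapter examples, but treat them as approximate):

```ruby
plot = Daru::View::Plot.new(df, type: :line, x: :year, y: :sales)
plot   # in an IRuby notebook, the returned object renders the chart inline
```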
This will return the chart object we created using GoogleCharts. In IRuby notebook, you will see this:
You can find the IRuby notebook example in this link.
There are various chart types we can use, e.g. line, area, bar, bubble, candlestick, combo, histogram, org, pie, stepped area, timeline, treemap, gauge, column, scatter, etc. The customization options can be found on the Google Charts site.
Let me try another chart type, Geo:
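Reconstructed only loosely; the exact data columns and options for the Geo chart were in the lost listing:

```ruby
geo_df = Daru::DataFrame.new(
  { Country: ['Germany', 'Brazil', 'India'], Popularity: [200, 400, 600] }
)
geo = Daru::View::Plot.new(geo_df, type: :geo, adapter: :googlecharts)
```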
Note: If you have already loaded the dependent JS files for the library, you can use adapter: :googlecharts in your Plot initialization.
Set the plotting library to :highcharts to use this adapter. This will load the required JS files in your webpage or IRuby notebook.
Let’s pass the data in a format HighCharts supports (we can pass a DataFrame as well):
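A HighCharts-style data array looks roughly like this (series hashes in the shape the lazy_high_charts gem accepts; the exact combination here is illustrative):

```ruby
data = [
  { name: 'Series A', data: [1, 3, 2, 4] },
  { name: 'Series B', data: [4, 2, 5, 3] }
]
plot = Daru::View::Plot.new(data, chart: { type: 'line' }, adapter: :highcharts)
```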
This will return the Plot object we created.
In IRuby notebook, you will see this:
You can find the IRuby notebook example in this link.
There are various chart types we can use, e.g. line, area, bar, bubble, dynamic, pie, column, scatter, etc. The customization options can be found on the HighCharts site.
This will return the table object we created using the GoogleCharts tool, which renders inline in an IRuby notebook.
We can create a table using Vectors as well.
Currently there is a problem displaying it in the IRuby notebook, but in a web application you can see something like this using df_datatable.div:
As we know, we can get the HTML and JS code for a chart from the Daru::View::Plot or Daru::View::Table object using the #div method, so we just need to add that HTML and JS code in the proper place in the webpage.
There are a few things to note: in the layout of the webpage, you have to load all the dependent JS files so that the generated HTML and JS code works smoothly in that webpage. You can load the dependent JS files for the Nyaplot library using Daru::View.dependent_script(:nyaplot), and similarly for the other libraries.
If you are using multiple libraries in one webpage, load all their dependent JS files in the webpage layout (generally in the head tag).
We can set the default adapter using Daru::View.plotting_library = :googlecharts, and we can also change it for a particular object while initializing it, i.e. Daru::View::Plot.new(data, {adapter: :googlecharts}). We just have to make sure the dependent JS files for it are loaded.
To make it easy, we have defined daru_chart (which works the same as Daru::View::Plot.new) and daru_table (which works the same as Daru::View::Table.new) for Rails applications, so you can easily use them in a controller or view of the application. For reference, you can check the demo Rails app.
daru-view currently uses Nyaplot, HighCharts and GoogleCharts for plotting charts. It also generates tables using DataTables and GoogleCharts, with pagination, search and various other features.
daru-view mainly uses the adapter design pattern and composite design pattern.
Why Adapter design pattern:
The adapter pattern's motivation is that we can reuse existing gems if we can adapt their interfaces.
daru-view joins the functionalities of independent or incompatible interfaces of different gems.
daru-view has Plot and Table classes, which use an adapter once the adapter (the library to be used for plotting) is set for the Plot or Table instance.
Why Composite design pattern:
To define common objects and use them for defining composite objects.
In daru-view we try to write common functions in a module and include it wherever needed.
daru-view ensure that it’s functions are usable in both IRuby notebook as well as ruby web application frameworks.
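The adapter selection described above can be pictured with a small, self-contained sketch. This is illustrative only (the class and module names here are hypothetical, not daru-view's source): a Plot-like class mixes in one of several interchangeable backend modules chosen by a symbol, just as daru-view picks an adapter such as :nyaplot or :googlecharts.

```ruby
# Illustrative sketch of the adapter pattern -- not daru-view's source.
module NyaplotAdapter
  def init_script
    "<script src='nyaplot.js'></script>"
  end
end

module GoogleChartsAdapter
  def init_script
    "<script src='loader.js'></script>"
  end
end

class PlotSketch
  ADAPTERS = { nyaplot: NyaplotAdapter, googlecharts: GoogleChartsAdapter }.freeze

  def initialize(adapter: :nyaplot)
    # Mixing in the chosen adapter gives this instance that backend's methods.
    extend ADAPTERS.fetch(adapter)
  end
end

puts PlotSketch.new(adapter: :googlecharts).init_script
# prints: <script src='loader.js'></script>
```

Swapping the adapter symbol changes which backend's methods the instance carries, without the caller's code changing.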
The main thing we need to display something in a web application or the IRuby notebook is its HTML code. daru-view generates the HTML code of the chart or table, and the same code can be used for display in web applications and the IRuby notebook.
These are the libraries that are currently used in daru-view:
Nyaplot is a good library for visualization, but in the IRuby notebook only. When we use Nyaplot as the adapter in daru-view, it becomes usable in both the IRuby notebook and web applications. A Daru DataFrame or Vector is used as the data source of the chart. It works similarly to the initial daru plotting system.
If users want to use Nyaplot methods, they can do so on the Nyaplot object. We can get the Nyaplot object using daru_plot_obj.chart. Users can then invoke all the methods of the Nyaplot object. The same applies to all the other adapters in daru-view.
To add the HighCharts features for plotting various chart types, daru-view uses the lazy_high_charts gem with additional features.
In this adapter the data source can be an Array of data, a Daru::DataFrame, a Daru::Vector, or the HTML table code of the data.
There are various options in HighCharts. One can see the options that can be used in the HighCharts demo link, and they can be used directly in a daru-view Plot.
The HighCharts adapter can also work offline in daru-view. Developers can update the JS files saved in daru-view automatically using a rake task.
If you are familiar with the lazy_high_charts gem and want to use it to configure the chart, then you can access the lazy_high_charts object using Daru::View::Plot#chart and do the necessary operations.
To add the GoogleCharts features for plotting various chart types, daru-view uses the google_visualr gem with additional features (more new features have been added in this module).
We want the GoogleCharts adapter to be very strong, since the Google chart tools are always being updated and have amazing plotting features. Similar to the HighCharts module, here too we can use all the options described on the Google Charts website.
Users can access the google_visualr object using Daru::View::Plot#chart, if they want to invoke google_visualr methods.
One of the good things about the Google chart tools is that they can be used for generating tables, with pagination and other features, for both web applications and the IRuby notebook.
Daru::View::Plot can take an Array, a Daru::DataFrame, a Daru::Vector or a Daru::View::Table as its data source.
Daru::View::Table can take an Array, a Daru DataFrame or a Daru Vector as its data source.
DataTables adds interaction controls to any HTML table. It can handle large sets of data and has many cool features.
To use it, daru-view uses the https://github.com/Shekharrajak/data_tables gem. [Note: the gem name will be changed in the near future]
It basically takes the HTML table code and adds the features that the user wants, so internally the HTML table code of a Daru::DataFrame or Daru::Vector is passed as the data source parameter.
daru-view will become more powerful and simpler in the near future. Developers can easily add more libraries to daru-view if required; to add a library, follow the steps given in CONTRIBUTING.md.
The aim of daru-view is to plot charts in the IRuby notebook and Ruby web applications easily, so that developers need not use any other gem or language for visualization.
It works smoothly in the Rails/Sinatra/Nanoc web frameworks, and I hope it can work in other Ruby frameworks as well, because daru-view generates the HTML and JavaScript code for the chart, which is the basic need of any webpage.
Why not use the plotting libraries directly?
If you are using the daru gem for analyzing data and want to visualize it, then it is good to have data visualization within daru, so you can plot directly from daru's DataFrame/Vector objects.
daru-view is helpful for plotting charts and tables directly from a Daru::DataFrame or Daru::Vector. daru-view uses Nyaplot, HighCharts and Google Charts right now to plot charts, so users can set the plotting library and get the chart accordingly.
Most plotting libraries don't provide the feature of plotting charts in the IRuby notebook; they are designed only for web applications (mostly for Rails). But daru-view can plot charts in any Ruby web application as well as in the IRuby notebook.
I would like to thank my mentors Sameer Deshmukh, Lokesh Sharma and Victor Shepelev for their responses and support, and I am very grateful to the Ruby Science Foundation for this golden opportunity.
I thank my fellow GSoC participants Athitya Kumar and Prasun Anand for their support and discussions on various topics.
Thanks to Google for conducting Google Summer of Code.
]]>“Hello friend. Hello friend? That’s lame.” - S01E01 (Pilot), Mr.Robot
My name is Athitya Kumar, and I’m a 4th year undergrad from IIT Kharagpur, India. I was selected as a GSoC 2017 student developer by Ruby Science Foundation for project daru-io.
Daru-IO is a plugin gem to the Daru gem that extends support for many Import and Export methods of Daru::DataFrame. This gem is intended to help Rubyists who are into Data Analysis or Web Development, by serving as a general-purpose conversion library.
Through this summer, I worked on adding support for various Importers and Exporters while also porting some existing modules. Feel free to find a comprehensive set of useful links in Final Work Submission and README. Before proceeding any further, you might also be interested in checking out a sample showcase of Rails example and the code making it work.
“Rubyists, Data Analysts and Web Developers, lend me your ears;
I come to write about my GSoC project, not to earn praise for it.”
For the uninitiated, Google Summer of Code (GSoC) 2017 is a 3-month program that focuses on introducing selected students to open-source software development. To know more about GSoC, feel free to click here.
daru is a Ruby gem that stands for Data Analysis in RUby. My initial proposal was to make daru easier to integrate with Ruby web frameworks through better import-export features (daru-io) and visualization methods (daru-view). However, as both Shekhar and I were selected for the same proposal, we split this amongst ourselves: daru-io was allocated to me and daru-view was allocated to Shekhar.
“The open-source contributions that people do, live after them;
But their private contributions, are oft interred with their bones.”
This is one of the reasons why I (and all open-source developers) are enthusiastic about open-source. In open-source, one’s work can be re-used in other projects in accordance with the listed LICENSE and attribution, compared to the restrictions and risk of Intellectual Property Right claims in private work.
“So be it. The noble Pythonistas and R developers;
Might not have chosen to try daru yet.”
It is quite understandable that Pythonistas and R developers feel that their corresponding languages have sufficient tools for Data Analysis. So, why would they switch to Ruby and start using daru?
“If it were so, it may have been a grievous fault;
Give daru a try, with daru-io and daru-view.”
First of all, I don’t mean any offense when I say “grievous fault”. But please, do give Ruby and daru family a try, with an open mind.
Voila - the daru family has two new additions, namely daru-io and daru-view. Ruby is a language which is extensively used in Web Development with multiple frameworks such as Rails, Sinatra, Nanoc, Jekyll, etc. With such a background, it only makes sense for daru to have daru-io and daru-view as separate plugins, thus making the daru family easily integrable with Ruby web frameworks.
“Here, for attention of Rubyists and the rest–
For Pandas is an honourable library;
So are they all, all honourable libraries and languages–
Come I to speak about daru-io’s inception.”
Sure, the alternatives in other languages like Python, R and Hadoop are also good data analysis tools. But, how readily can they be integrated into any web application? R & Hadoop don’t have a battle-tested web framework yet, and are usually pipelined into the back-end of any application to perform any analysis. I’m no one to judge such pipelines, but I feel that pipelines are hackish workarounds rather than being a clean way of integrating.
Meanwhile, though Python too has its own set of web frameworks (like Django, Flask and more), Pandas doesn’t quite work out-of-the-box with these frameworks and requires the web developer to write lines and lines of code to integrate Pandas with parsing libraries and plotting libraries.
“daru-io is a ruby gem, and open-sourced to all of us;
But some might think it was an ambitious idea;
And they are all honourable men.”
As described above, daru-io is open-sourced under the MIT License with attribution to myself and Ruby Science Foundation. Being a ruby gem, daru-io follows the best practices mentioned in the Rubygems guides and is all geared up with a v0.1.0 release.
Disclaimer - By “men”, I’m not stereotyping “them” to be all male; I’m merely retaining the resemblance to the original speech of Mark Antony.
“daru-io helps convert data in many formats to Daru::DataFrame;
Whose methods can be used to analyze huge amounts of data.
Does this in daru-io seem ambitious?”
Daru has done a great job of encapsulating the two main structures of Data Analysis - DataFrames and Vectors - with a ton of functionalities that are growing day by day. But obviously, the huge amounts of data aren’t going to be manually fed into the DataFrames right?
One part of daru-io is the battalion of Importers that ship along with it. Importers are used to read from a file / Ruby instance, and create DataFrame(s). These are the Importers being supported by v0.1.0 of daru-io :
For more specific information about the Importers, please have a look at the README and YARD Docs.
Let’s take a simple example of the JSON Importer, to import from a GitHub API response. By default, the API response is paginated and 30 repositories are listed at the url: https://api.github.com/users/#{username}/repos.
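The idea behind the JSON Importer can be sketched without the gem: pick one field per column out of an array of JSON objects. The helper name columns_from_json below is hypothetical; it only illustrates the mapping that daru-io performs with JsonPath selectors and is not daru-io's API.

```ruby
require 'json'

# Hypothetical helper, for illustration only: pick out one field per
# column from an array of JSON objects, as the JSON Importer does with
# JsonPath selectors such as '$..name'.
def columns_from_json(json_text, *fields)
  rows = JSON.parse(json_text)
  fields.to_h { |f| [f, rows.map { |r| r[f] }] }
end

# A tiny stand-in for an API response listing repositories.
response = '[{"name":"daru","stargazers_count":10},
             {"name":"daru-io","stargazers_count":5}]'

columns_from_json(response, "name", "stargazers_count")
# => {"name"=>["daru", "daru-io"], "stargazers_count"=>[10, 5]}
```

Each resulting key/array pair corresponds to one column of the DataFrame the Importer would build.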
“When working with a team of Pythonistas and R developers;
daru-io helps convert Daru::DataFrame to multiple formats.
Does this in daru-io seem ambitious?”
The second part of daru-io is the collection of Exporters that ship with it. Exporters are used to write the data in a DataFrame, to a file / database. These are the Exporters being supported by v0.1.0 of daru-io :
For more specific information about the Exporters, please have a look at the README and YARD Docs.
Let’s take a simple example of the RDS Exporter. Say, your best friend is an R developer who’d like
to analyze a Daru::DataFrame
that you have obtained, and perform further analysis. You don’t want
to break your friendship, and your friend is skeptical of learning Ruby. No issues, simply use the RDS
Exporter to export your Daru::DataFrame
into a .rds file, which can be easily loaded by your friend
in R.
“You all did see that in the repository’s README;
Codeclimate presented a 4.0 GPA;
Code and tests were humbly cleaned;
with help of rubocop, rspec, rubocop-rspec and saharspec.
Ambition shouldn’t have been made of humble stuff.
Yet some might think it is an ambitious idea;
And sure, they are all honourable men.”
Thanks to guidance from my mentors Victor Shepelev, Sameer Deshmukh and Lokesh Sharma, I’ve come to know about quite a lot of Ruby tools that could be used to keep the codebase sane and clean.
its_call.
“I speak not to disapprove of what other libraries do;
But here I am to speak what I do know.
Give daru-io a try and y’all will love it, not without cause:
Does anything withhold you then, from using daru-io?”
I really mean it when I specifically say “I speak not to disapprove of what other libraries do”. In the world of open-source, there should never be hate among developers regarding languages or libraries. Developers definitely have their (strong) opinions and preferences, and it’s understandable that differences of opinion do arise. But as long as there’s mutual respect for each other’s opinions and choices, all is well.
“O Ruby community! Thou should definitely try out daru-io,
With daru and daru-view. Bear with me;
My heart is thankful to the community of Ruby Science Foundation,
And I must pause till I write another blog post.”
If you’ve read all the way down to here, I feel that you’d be interested in trying out the daru family, after having seen the impressive demonstration of Importers & Exporters above, and the Rails example (Website | Code). I’m very thankful to mentors Victor Shepelev, Sameer Deshmukh and Lokesh Sharma for their timely Pull Request reviews and open discussions regarding features. Daru-IO would not have been possible without them and the active community of Ruby Science Foundation, who provided their useful feedback whenever they could. The community has been very supportive overall, and hence I’d definitely be interested in getting involved with SciRuby via more open-source projects.
]]>Let’s talk about each of them in detail.
Categorical data is now readily recognized by Daru and Daru has all the necessary procedures for dealing with it.
To analyze a categorical variable, simply turn the numerical vector into a categorical one and you are ready to go.
We will use, for demonstration purposes, animal shelter data taken
from the Kaggle Competition. It is
stored in shelter_data
.
Please refer to this blog post to know more.
With the help of Nyaplot, GnuplotRB and Gruff, Daru now provides the ability to visualize categorical data as it does with numerical data.
To plot a vector with Nyaplot one needs to call the function #plot
.
Given a dataframe, one can plot a scatter plot such that the points’ color, shape and size vary according to a categorical variable.
In a similar manner Gnuplot and Gruff also support plotting of categorical variables.
An additional piece of work I did was to integrate Gruff with Daru. Now one can also plot vectors and dataframes using Gruff.
See more notebooks on visualizing categorical data with Daru here.
Now categorical data is supported in multiple linear regression and generalized linear models (GLM) in Statsample and Statsample-GLM.
A new formula language (like that used in R or Patsy) has been introduced to ease the task of specifying regressions.
Now there’s no need to manually create a dataframe for regression.
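The formula language essentially splits a response term from the predictor terms. A toy parser (a simplification for illustration, not Statsample-GLM's implementation) shows the shape of the R-style syntax:

```ruby
# Toy sketch of an R/Patsy-style formula split -- not Statsample-GLM's
# actual parser. "y ~ a + b + a:b" names a response y and three
# predictor terms, "a:b" being an interaction term.
def parse_formula(formula)
  response, rhs = formula.split("~").map(&:strip)
  { response: response, predictors: rhs.split("+").map(&:strip) }
end

parse_formula("Viral.load ~ Age + Drug + Age:Drug")
# => {:response=>"Viral.load", :predictors=>["Age", "Drug", "Age:Drug"]}
```

The real implementation goes further, expanding each term into the dummy-coded columns of the model matrix, so no dataframe has to be built by hand.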
Additionally, through the work of Alexej Grossmann, one can also predict on new data using the model.
This, I believe, makes Statsample-GLM very convenient to use.
See this for a complete example.
In addition to the aforementioned, there are some other considerable changes:
CategoricalIndex
to handle the case when the index column is categorical data. More about it here.
You can read about all my work in detail here. Additionally, my project page can be found here.
I hope with these additions one will be able to see data more clearly with Daru.
]]>There are installation instructions in the readme and there are already some sample Kernel files in
the spec folder to try out various routines on. Apart from cloning the repository, one additional thing you will need to do is download the SPICE toolkit. You may want to keep the entire compressed file for later, but you’ll only need the cspice.a
file in the lib/
subdirectory. After this follow the instructions in the readme and you should be good to go. (Be sure to run bundle install
to install any dependencies that you don’t already have.)
After you’re done compiling and installing, run rake pry
in the gem root directory.
If you remember, almost any useful task involving the SPICE Toolkit is preceded by loading data through kernel files. The relevant routine to do this is called furnsh_c()
, and the most direct way to access it through Ruby is by calling the function SpiceRub::Native.furnsh
. (However, this is not recommended because SpiceRub has a specific Ruby class unifying all the kernel related methods, and also because SPICE maintains its own internal variables for both tracking loaded kernel files and unloading them.)
The wrapper thus has a crude dependency sequence: a KernelPool method calls the corresponding SpiceRub::Native function, which in turn wraps the underlying CSPICE routine.
That’s the basic design of the wrapper, so here are a few simple examples of using the Kernel API. => denotes the interpreted output of pry.
First of all, the main KernelPool class is a singleton class, that means it can only be instantiated with the #instance
method and the usual #new
is private.
Any subsequent calls to #instance
will produce the same object.
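The singleton behaviour described here can be sketched with Ruby's stdlib Singleton module (SpiceRub's actual implementation may differ in detail):

```ruby
require 'singleton'

# Sketch of the behaviour described above: #instance always returns the
# one shared object, and #new is private. Class name is illustrative.
class KernelPoolSketch
  include Singleton
  attr_accessor :path
end

a = KernelPoolSketch.instance
b = KernelPoolSketch.instance
a.equal?(b)   # => true, the very same object

begin
  KernelPoolSketch.new
rescue NoMethodError
  # raised, because new is private on a singleton class
end
```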
This is to make sure there is only one KernelPool state being maintained at a time. Now we have a bunch of kernel files in the spec/data/kernels
folder which we’ll use for this example. There is an accessor attribute called @path
which can be used to point to a particular folder.
From here on it’s required that you’re running pry or irb from the gem root folder in order for the paths to match in these examples.
Now let’s load a couple of kernel files. You can type system("ls", kernel_pool.path) into your console to get a list of all the test kernels available in that folder. The KernelPool object has a #load method to load kernel files. If the path variable is set, then you only need to enter the file name; otherwise the entire path needs to be provided. An integer denoting the index of the kernel in the pool is returned if the load is successful.
Note that this is the same as providing the full relative path of spec/data/kernels/naif0011.tls
when the path
variable is not set or nil.
Let’s load two more files:
So now if you try to view the @pool
member of kernel_pool
, you’ll find three SpiceKernel objects with @loaded=true
and their respective file paths.
There isn’t much to do with a SpiceKernel
object except unload it, or check its state. Note that you can only load kernel files into a KernelPool
object and unload them via the SpiceKernel
object.
You can access the kernel pool by calling the #[]
operator and using the index that was returned on load:
So here we unload the first kernel and note that the count drops to 2. If you look up kernel_pool[0]
, you’ll find that the kernel is still present — but its @loaded
variable has been set to false, which means that it has been removed from the CSPICE internal kernel pool.
To unload all kernels simultaneously and delete the kernel pool, use #clear!
And that about wraps up this blog post on basic kernel handling. Since we know how to load data but not use it yet, I’ll cover that and the various kernel types in the next blog post. Thank you for reading.
]]>Target: The body of interest
Frame: A rotational frame of reference (Default is J2000 [Not to be confused with the J2000 epoch])
Observer: An observing body whose viewpoint is used to chart the vector
Epoch: An epoch in Ephemeris Time
SPICE has an integer-key convention for the kind of bodies that it
has support for. Each body can be referenced via a string or an
integer id. While there isn’t an actual strict range for integer ID
classification, it is mentioned here and can be summed up
in the following if
and elsif
clauses. (In Ruby, constant strings are better off as symbols, so the constructor takes either an integer ID or a string symbol.)
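As a rough illustration, and simplifying the NAIF numbering scheme considerably (this is not SpiceRub's exact code), those clauses might look like:

```ruby
# Simplified sketch of the NAIF integer-ID convention -- not SpiceRub's
# exact clauses. Negative IDs denote spacecraft (or instruments below
# -1000), 0-9 are barycenters, 10 is the Sun, IDs ending in 99 are
# planets, other three-digit IDs are natural satellites, and so on.
def body_type(id)
  if id < -1000                   then :instrument
  elsif id.negative?              then :spacecraft
  elsif id.between?(0, 9)         then :barycenter
  elsif id == 10                  then :sun
  elsif id % 100 == 99 && id.between?(199, 999) then :planet
  elsif id.between?(101, 998)     then :satellite
  elsif id >= 2_000_000           then :asteroid
  elsif id >= 1_000_000           then :comet
  else                                 :other
  end
end

body_type(399)  # => :planet     (Earth)
body_type(301)  # => :satellite  (the Moon)
body_type(-82)  # => :spacecraft (Cassini)
```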
It was very tempting to involve inheritance and extend a base Body
class onto these potential classes, but I simply did not see the need
for it at this point. The way it is at the moment, the Body
object
has a reader attribute type that stores some metadata about the body
for the user’s convenience. Perhaps as coverage of SPICE improves,
this minor thing can be changed later on.
To create a Body object, you instantiate with either a body name or a
body id. Certain bodies such as instruments will require additional
kernels to be loaded. To proceed seamlessly, load a leap seconds
kernel, a planetary constants kernel, and an ephemeris kernel. (All
available in spec/data/kernels)
399
and :earth
map to the same body in SPICE data. The frame of
reference can also be specified as a named parameter during
instantiation to set a custom default frame for that particular
object.
In SPICE, a state is a length-6 column vector that stores position and velocity in 3D Cartesian coordinates.
As a base case, let’s find out the position of the Earth with respect to itself.
The origin as seen from itself is still the origin, so this makes
sense. The methods #velocity_at
and #state_at
take an identical
set of parameters. While there is a bit of redundancy going on,
splitting them makes the API more elegant, but the basic relationship
between these three vectors is the following:
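Assuming only that a state is the position stacked on the velocity, the relationship can be shown with plain arrays (the real accessors return NMatrix objects, and the numbers below are made up):

```ruby
# Plain-array sketch of how the three vectors relate. The real
# #position_at / #velocity_at / #state_at return NMatrix objects.
position = [-26.9, 132.8, 57.6]   # km   (made-up numbers)
velocity = [-29.8, -5.2, -2.3]    # km/s (made-up numbers)

state = position + velocity       # a 6-length state vector

state[0, 3] == position           # => true, first three entries
state[3, 3] == velocity           # => true, last three entries
```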
One thing to note is that state/velocity/position vectors will always
be returned as an NMatrix
object, SciRuby’s numerical matrix core,
to allow for calculations via the NMatrix API.
As an example that is used in the code, one line can turn a position vector into distance from origin (here using Euclidean distance):
As a simple, imprecise experiment, let’s find out how the speed of light can be “estimated” with this data.
The unit of distance here is kilometers, so the speed of light by this measurement comes out pretty close to the textbook figure of 3e+8 m/s.
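The same arithmetic can be reproduced in plain Ruby. The position vector and one-way light time below are made-up round figures for the Sun as seen from Earth, not SPICE output:

```ruby
# Made-up round figures, not SPICE output: a Sun position vector (km)
# as seen from Earth, and the one-way light time (s) to the Sun.
position   = [26_000_000.0, -132_000_000.0, -57_250_000.0]
light_time = 485.0

# Euclidean norm of the position vector gives the distance in km.
distance = Math.sqrt(position.sum { |e| e * e })

speed = distance / light_time
# speed is roughly 3.0e5 km/s, i.e. about 3e8 m/s
```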
There is also a function to check if a list of bodies are within a
radial proximity from an observing body. We already calculated the
distance of the moon to be about 367,000 km. The function
within_proximity
returns a list of all bodies that are within the
specified radial distance from the calling body object.
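The idea of within_proximity can be sketched in plain Ruby (this is not SpiceRub's implementation), filtering bodies by their Euclidean distance from the observer:

```ruby
# Sketch of the within_proximity idea: keep the bodies whose Euclidean
# distance from the observer is within the given radius. The position
# vectors (km) are made up for illustration.
def within_proximity(positions, radius)
  positions.select do |_name, pos|
    Math.sqrt(pos.sum { |e| e * e }) <= radius
  end.keys
end

positions = {
  moon: [350_000.0, 100_000.0, 10_000.0],  # roughly 364,000 km away
  mars: [2.0e8, 1.0e8, 0.0]                # far beyond the radius
}

within_proximity(positions, 400_000)  # => [:moon]
```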
Now that we’ve come to the end of the functionality, I would like to
mention that there is another named argument aberration_correction
which is basically an error reduction method to provide a more
accurate result than the default observation. The default :none
option for aberration correction basically provides the geometric
observations without any corrections for reception or transmission of
photons. For a list of various aberration correction methods
available, have a look at the documentation for spkpos_c to
find out if you need an aberration correction on SPICE data.
If you want to look at it another way, no aberration correction would give you the textbook response of rigid geometry, while introducing an aberration correction would give you a somewhat more realistic output accounting for the errors that do happen when these observations are made.
Finally, if you need to generate a continuous time series for a body,
then SpiceRub::Time
has two functions to aid in that:
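The linear spacing itself is simple arithmetic; here is a dependency-free sketch (the helper name is hypothetical) of what the NMatrix.linspace-backed function produces for the example discussed below:

```ruby
# Dependency-free sketch of linearly spaced epochs between two
# ephemeris times; the real helper delegates to NMatrix.linspace.
def linspaced_epochs(start_et, end_et, n)
  step = (end_et - start_et) / (n - 1).to_f
  Array.new(n) { |i| start_et + i * step }
end

day = 86_400.0
linspaced_epochs(0.0, day, 4)
# => [0.0, 28800.0, 57600.0, 86400.0]
```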
In this case, I took a start time and an end time that was one day
after and requested 4 linearly spaced epochs. This is basically an
interface to NMatrix.linspace
.
The other function requires you to input a start time and an end time and a step size that keeps getting added to the start time till the end time is reached. As a contrived example, we’ll take two epochs, five days apart and ask for a step size of a day, expecting six elements.
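A dependency-free sketch of the step-based variant (helper name hypothetical) reproduces the six-element expectation:

```ruby
# Sketch of the step-based time series: keep adding the step to the
# start epoch until the end epoch is reached (inclusive).
def stepped_epochs(start_et, end_et, step)
  (start_et..end_et).step(step).to_a
end

day = 86_400
stepped_epochs(0, 5 * day, day).length  # => 6
```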
And that’s it for this blog post. I would appreciate any feedback
regarding this as I’ve been juggling the design back and forth very
frequently. There is large potential for expansion of the Body class, particularly creating new classes where different Body objects would have corresponding functions. (For example, the function
getfov_c
which returns the field of view of an instrument could be
an instance function attached to the Instrument
subclass of Body
,
but this is just potential expansions in the future.)
00:00 A.M. UTC
is 5:30 A.M. IST
for me, and the world remains sane.
But what happens when you accept the fact that you’re just a speck of micro-dust adjusting time relatively for an only slightly bigger speck of dust floating in the universe? Twenty-four hours in a day and thus we reset after 2300, but consider: how would a resident of Venus know when tea-time is on Venus if he had an Earth wristwatch that reset after twenty-four hours? Barely a tenth of Venus’ day is complete in that time! (If you know anybody intent on relocating to Mars, do not gift them a clock or watch.)
So a decimal floating point representation must be the answer for uniformity. Time zones can be dealt with; we’ll just pick a convenient point in time and count the seconds from there onwards so that the location on Earth doesn’t matter henceforth. It’ll drive humans insane with the arithmetic but machines will work just fine with this. This sort of a time system is called epoch time.
And so the internal time of most UNIX machines is the number of
seconds after midnight on Thursday, 1 January 1970 UTC
. (And this
very convention is going to open a can of worms by
2038 if there is even a small set of critical machines
that haven’t moved on from 32-bit architectures.)
But we’re still not okay universally. Try going on an infinite journey to space and you’ll find that counting seconds leads to some inconsistencies with your local time when you try to synchronize with Earth. How can the number of seconds after January 1970 be different in any case? Well, your MacBook Pro has not been adjusted for … relativity! Gravity bends light and thus the perception of time. There’s a lot more mass, and thus a lot more gravitational fields, in neighborhoods away from Earth. The exact details of how this works are beyond the scope of this blog post.
If the past few paragraphs were incessant and seemingly irrelevant, they were there to drive home the point that Earth time simply will not do when we step out of the ghetto to see what’s happening. But astronomy’s been around for way longer, and astrophysicists came forth with a time system adjusted for the relativity effects of the solar system, called Barycentric Dynamical Time, or TDB. Like our machines, it counts the seconds after a certain known reference time point, except that it adjusts for relativity and can become a standard for astronomical time.
There are many similar time scales like this, but SPICE has chosen to
use TDB as the standard for most of their design. Within the SPICE
API, TDB is the same as Ephemeris Time which is the main system used
to specify time points of astronomical bodies. Even though spacecrafts
have their clocks coordinated with UTC on Earth, you would require
that time in Ephemeris Time to be able to calculate their positions and
velocities using SPICE. SpiceRub::Time
is built for this very purpose,
to revolve around a main attribute @et
for Ephemeris Time and
provide many methods to convert to and from.
If you’d like to proceed with the examples, you’ll need a Leap Second
Kernel file to use SpiceRub::Time
. This is a generic kernel, so you
can easily use naif0011.tls
in spec/data/kernels
of the repository
folder.
So Ephemeris Time is the number of seconds elapsed after Noon, January 1, 2000, TDB
. This point in time is also known as the J2000
epoch. We find that out in an instant by using the Time.parse
function which is a wrapper function for SPICE’s str2et_c
that
converts many formats of strings to Ephemeris Time
. You can have a
look at the various string formats supported in its documentation
here
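Ignoring the TDB-UTC offset, the arithmetic behind such a conversion can be mimicked with Ruby's own time parsing. This toy counts UTC seconds past the J2000 epoch, so it is off from true Ephemeris Time by the 60-odd seconds discussed below:

```ruby
require 'time'

# Toy UTC-only version of "seconds past the J2000 epoch". True
# Ephemeris Time would additionally include the TDB-UTC offset.
J2000_UTC = Time.utc(2000, 1, 1, 12, 0, 0)

def utc_seconds_past_j2000(str)
  Time.parse(str).to_f - J2000_UTC.to_f
end

utc_seconds_past_j2000("2000-01-01 12:00:00 UTC")  # => 0.0
utc_seconds_past_j2000("2000-01-02 12:00:00 UTC")  # => 86400.0
```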
So as a base case, using the reference epoch gives us 0 seconds as we would expect. Now would also be a good time to find out the discrepancy in UTC
as well.
So right away we know that UTC was 64-ish seconds off from TDB / ET at the time of the reference J2000 epoch. What would the difference be around right now?
Well, here’s a surprise, it’s 68.18 now. Before I explain why that is, here is a brief overview of what the above code does:
Time.now
is a convenient way to specify your current UTC
timezone. It uses Ruby’s core Time.now
method so this method is only
good if you’re working in UTC or Earth-like timezones. For a similar
purpose, the function Time.from_time
let’s you create a SpiceRub
Time object from a Ruby Time object.
The +/-
operators return a new Time object where the right operand
is added/subtracted to the left operand’s @et
when it is a float or
integer. If a Time object is supplied, then it does the same with the
right operand’s ephemeris time instead. (Note that there really isn’t
a significant meaning to having a Time object whose @et is the
difference/sum of two other epochs, however you can increase a certain
epoch or decrease it by a constant offset of seconds)
In our case we used #to_utc
to convert from ephemeris time to UTC,
and then the minus operator gave us a Time object that wasn’t really
an epoch, but a difference of two epochs, so using #to_f
got us
exactly that.
It appears that UTC has changed by 4 seconds since 2000 with respect to ephemeris time. This is actually the adjustment of “leap seconds” that gets added to UTC to prevent it from falling too far behind other time systems. (Humans really like to hack everything, don’t they?)
To verify this yourself, if you open up the kernel naif0011.tls
in your
text editor and search for DELTET/DELTA_AT
, you’ll find a list like
representation of the following sort:
Here you can see that just before the year 2000, there were 32 leap seconds added to UTC, and in 2015 when the last leap second was added, there were 36. It’s an ongoing and indefinite process and so there really is no way to account for leap second errors far in the future for leap seconds that are yet to be added. As of now, the next scheduled addition is in December, 2016.
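The 64-ish and 68.18 second differences seen earlier can be reproduced from this table, under the assumption that ET minus UTC is approximately the fixed 32.184 s (TDT minus TAI) offset plus the leap seconds in effect:

```ruby
# ET - UTC is approximately the fixed 32.184 s (TDT - TAI) offset plus
# the leap seconds in effect; leap-second counts from the DELTA_AT list.
TDT_MINUS_TAI = 32.184

def et_minus_utc(leap_seconds)
  TDT_MINUS_TAI + leap_seconds
end

et_minus_utc(32)  # ≈ 64.184 s, around the J2000 epoch
et_minus_utc(36)  # ≈ 68.184 s, after the 2015 leap second
```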
Coming back to our Time object, let’s look at its basic
construction. One tricky task in the API was the option to specify
different epochs of reference in different time scales, like
International Atomic Time. As of now, Time.new
requires that you
have kept your word of using the J2000 epoch and allows you to use a
named parameter seconds:
for specifying the time scale. The use of
scale
as a key was avoided as it sometimes is also used to refer to the
reference epoch used.
:tai
here refers to International Atomic Time. For a list of more
parameters and their keyword abbreviations, have a look at
this SPICE documentation for the function that the
conversion is wrapped on top of.
But there is also a way to reference other epochs without doing the
manual conversions yourself: you can call the class method Time.at
to perform the same function as the constructor, with the option of a
different reference epoch. The resultant Time object will however have
its internal time referring to J2000.
A more readable way would involve step by step calculations, but that
would consume runtime resources every time Time.at
is called, so I’ve
basically pre-calculated the ephemeris times of the reference epochs
and subtracted them from the epoch.
To quickly verify the last one with the #to_s
method:
It’s exactly the UNIX epoch! Let’s try out 1 day (86400 seconds) after this epoch:
Just a second short of heading into the next day, because we’ve added 86400 TDB seconds and converted the time into a UTC string.
There are some more functions provided to work in tandem with the
Body
class that I’ll explain more about in the next blog post, but
this more or less covers the core of SpiceRub::Time. Till then,
thanks for reading.
I worked on “Port NMatrix to JRuby” in the context of the Google Summer of Code (GSoC) 2016 and I am pleased to announce that NMatrix can now be used in JRuby.
On JRuby, NMatrix, a linear algebra library, wraps Apache Commons Math for its most basic functionalities. NMatrix supports dense matrices containing either doubles or Ruby objects as the data type. The performance of JRuby with Apache Commons Math is quite satisfactory (see below for performance comparisons) even without making use of JRuby’s threading capabilities.
I have also ported the mixed_models gem, which uses NMatrix heavily at its core, to JRuby. This gem allowed us to test NMatrix-JRuby with real-life data.
This blog post summarizes my work on the project with SciRuby, and reports the final status of the project.
The original GSoC proposal, plan and application can be found here. Until merging is complete, commits are available here.
I have benchmarked some of the NMatrix functionalities. The following plots compare the performance between NMatrix-JRuby, NMatrix-MRI, and NMatrix-MRI using LAPACK/ATLAS libraries. (Note: MRI refers to the reference implementation of Ruby, for those who are new.)
Notes:
For two-dimensional matrices, NMatrix-JRuby is currently slower than NMatrix-MRI for matrix multiplication and matrix decomposition functionalities (calculating determinant and factoring a matrix). NMatrix-JRuby is faster than NMatrix-MRI for other functionalities of a two-dimensional matrix — like addition, subtraction, trigonometric operations, etc.
NMatrix-JRuby is a clear winner when we are working with matrices of arbitrary dimensions.
The major components of an NMatrix
are shape, elements, dtype and
stype. When initialized, the dense type stores the elements as a one-dimensional
array; in the JRuby port, the ArrayRealVector
class is used to store
the elements.
@s
stores elements, @shape
stores the shape of the matrix, while
@dtype
and @stype
store the data type and storage type
respectively. Currently, NMatrix-JRuby is implemented only for the
:float64
(double) and Ruby :object
data types.
NMatrix-MRI uses a C struct to store the dim, shape, offset, count and src of an NMatrix. ALLOC and xfree are used to wrap the NMatrix attributes in C structs and to release memory that is no longer required.
Implementing slicing was the toughest part of the NMatrix-JRuby
implementation. NMatrix@s
stores the elements of a matrix as a
one-dimensional array, and the elements along any dimension are accessed with the
help of the stride. NMatrix#get_stride
calculates the stride from the dimension and shape and returns an Array.
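In pure Ruby, a row-major stride computation of this kind can be sketched as follows (illustrative, not the exact NMatrix-JRuby source):

```ruby
# Compute row-major strides from a shape: the stride of a dimension is
# the product of the sizes of all later dimensions.
def get_stride(shape)
  stride = Array.new(shape.size, 1)
  (shape.size - 2).downto(0) do |i|
    stride[i] = stride[i + 1] * shape[i + 1]
  end
  stride
end

get_stride([2, 3, 4])  # => [12, 4, 1]
```

So for a [2, 3, 4] tensor, stepping one position along the first dimension skips 12 elements in the flat store, one position along the second skips 4, and the last dimension is contiguous.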
NMatrix#[]
and NMatrix#[]=
are thus able to read and write the
elements of a matrix. NMatrix-MRI uses the @s
object, which stores
the stride when the NMatrix is initialized.
NMatrix#[]
calls the #xslice
operator, which calls the #get_slice
operator that uses the stride to determine whether we are accessing a
single element or multiple elements. If there are multiple elements,
#dense_storage_get
returns an NMatrix object with the elements along
the dimension.
NMatrix-MRI differs from the NMatrix-JRuby implementation in that it must make sure memory is properly utilized, since the memory needs to be explicitly managed and garbage collected.
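Given the strides, reading a single element out of the flat store reduces to a dot product of indices and strides. A plain-Ruby sketch (hypothetical helper name, not the actual implementation):

```ruby
# Look up one element of a flat, row-major element store using strides.
def element_at(elements, stride, indices)
  offset = indices.each_with_index.sum { |idx, dim| idx * stride[dim] }
  elements[offset]
end

elements = (1..24).to_a                      # a 2x3x4 tensor stored flat
element_at(elements, [12, 4, 1], [1, 2, 3])  # => 24 (the last element)
```

Multi-element slices follow the same logic, walking a range of indices per dimension instead of a single index.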
NMatrix#[]=
calls the #dense_storage_set
operator, which calls the
#get_slice
operator that uses the stride to find out whether we are
accessing a single element or multiple elements. If there are
multiple elements, #set_slice
recursively sets the elements of the
matrix and then returns an NMatrix object with the elements along the
dimension.
All the relevant code for slicing can be found here.
NMatrix-MRI uses C code for enumerating the elements of a matrix. Just as with slicing, NMatrix-JRuby uses pure Ruby code in place of the C code. Currently, all the enumerators for dense matrices with real data types have been implemented and are properly functional; enumerators for objects have not yet been implemented.
Linear algebra is mostly about two-dimensional matrices. In NMatrix,
when performing calculations on a two-dimensional matrix, the one-dimensional array
of elements is converted to a two-dimensional matrix. A two-dimensional matrix is
stored in the JRuby implementation as a BlockRealMatrix
or
Array2DRowRealMatrix
. Each has its own advantages.
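The flat-store-to-matrix conversion can be sketched in plain Ruby (illustrative only; the real code builds a BlockRealMatrix or Array2DRowRealMatrix rather than nested arrays):

```ruby
# Reshape a flat, row-major element store into a rows x cols matrix.
def to_two_dimensional(elements, rows, cols)
  Array.new(rows) { |i| elements[i * cols, cols] }
end

to_two_dimensional([1, 2, 3, 4, 5, 6], 2, 3)  # => [[1, 2, 3], [4, 5, 6]]
```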
Memory Usage and Garbage Collection: A scientific library is memory intensive and hence, every step counts. The JRuby interpreter doesn’t need to dynamically guess the data type and uses less memory, typically around one-tenth of it. If the memory is properly utilized, when the GC kicks in, the GC has to clear less used memory space.
Speed: Using the Java method greatly improves speed — by around 1000 times when compared to using the Ruby method.
All the operators from NMatrix-MRI have been implemented except modulus. The binary operators were easily implemented through Commons Math API and Java Math API.
Unary operators (trigonometric, exponentiation and log operators) have been implemented using the #mapToSelf
method, which takes a univariate function
as an argument. #mapToSelf
applies the univariate function
that is passed to it to every element of the ArrayRealVector object in place, and returns the self
object.
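The in-place mapping idea is easy to mirror in plain Ruby (a sketch of the concept, not the JRuby interop code):

```ruby
# Apply a univariate function to every element of a flat store in place,
# much as #mapToSelf does for an ArrayRealVector.
elements = [0.0, Math::PI / 6, Math::PI / 2]
elements.map! { |x| Math.sin(x) }
elements.map { |x| x.round(1) }  # => [0.0, 0.5, 1.0]
```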
NMatrix#method(arg) has been implemented using bivariate functions provided by Commons Math API and Java Math API.
NMatrix-MRI relies on LAPACK and ATLAS for matrix decomposition and
equation-solving functionalities. Apache Commons Math provides a different set
of APIs for decomposing a matrix and solving an equation. For example,
#potrf
and other LAPACK-specific functions have not been implemented,
as they are not required at all.
Calculating the determinant in NMatrix is tricky: the matrix is reduced to either a lower or an upper triangular matrix, and the diagonal elements of the matrix are multiplied to get the result, with the correct sign of the result (positive or negative) taken into account. NMatrix-JRuby, however, uses the Commons Math API to calculate the determinant.
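The sign-tracking reduction described above can be sketched in plain Ruby (an illustrative Gaussian-elimination determinant, not the NMatrix source):

```ruby
# Determinant via Gaussian elimination with partial pivoting.
# Each row swap flips the sign of the determinant; the determinant of
# the resulting upper-triangular matrix is the product of its diagonal.
def determinant(m)
  a = m.map(&:dup)
  n = a.size
  det = 1.0
  n.times do |k|
    pivot = (k...n).max_by { |r| a[r][k].abs }
    return 0.0 if a[pivot][k].zero?
    if pivot != k
      a[k], a[pivot] = a[pivot], a[k]
      det = -det  # row swap changes the sign
    end
    det *= a[k][k]
    ((k + 1)...n).each do |i|
      f = a[i][k] / a[k][k]
      (k...n).each { |j| a[i][j] -= f * a[k][j] }
    end
  end
  det
end

determinant([[1.0, 2.0], [3.0, 4.0]])  # ≈ -2.0
```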
Given below is code that shows how Cholesky decomposition has been implemented by using Commons Math API.
Similarly, LU Decomposition and QR factorization have been implemented.
NMatrix#solve
The solve method currently uses LU and Cholesky decomposition.
NMatrix#matrix_solve
Suppose we need to solve a system of linear equations:
AX = B
where A is an m×n matrix and B and X are n×p matrices. We solve this equation by iterating through the columns of B.
NMatrix-MRI implements this functionality using the NMatrix::BLAS::cblas_trsm
method. For NMatrix-JRuby, NMatrix#matrix_solve
is the analogous method.
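The column-by-column approach can be sketched in plain Ruby; the names here are illustrative (not the actual NMatrix#matrix_solve source), and the block stands in for any single-right-hand-side solver:

```ruby
# Solve AX = B by solving A x = b for each column b of B, then
# reassembling the solution columns into X.
def matrix_solve(a, b)
  num_cols = b.first.size
  columns = (0...num_cols).map do |j|
    rhs = b.map { |row| row[j] }   # j-th column of B
    yield(a, rhs)                  # delegate to a single-RHS solver
  end
  columns.transpose
end

# Trivial single-RHS solver for a diagonal A, just to exercise the loop:
x = matrix_solve([[2.0, 0.0], [0.0, 4.0]], [[2.0, 4.0], [4.0, 8.0]]) do |a, rhs|
  rhs.each_with_index.map { |v, i| v / a[i][i] }
end
x  # => [[1.0, 2.0], [1.0, 2.0]]
```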
Currently, Hessenberg transformation for NMatrix-JRuby has not been implemented.
I have tried implementing float dtypes using the FloatMatrix
class
provided by jblas. jblas was used instead of Commons Math because the
latter uses Field Elements
for floats, and it had some issues
with Reflection
and Type Erasure
.
However, using jblas resulted in errors due to precision.
To minimise conflict with the MRI codebase all the JRuby front end
code has been placed in the /lib/nmatrix/jruby
directory. lib/nmatrix/nmatrix.rb
decides whether to load
nmatrix.so
or nmatrix_jruby.rb
after detecting the Ruby platform.
The added advantage is that the Ruby interpreter does not have to decide which function to call at run-time. The impact on performance can be seen when programs which intensively use NMatrix for linear algebraic computations (e.g., mixed_models) are run.
After the port, this is the final report that summarizes the number of tests that pass successfully:
Spec file | Total Tests | Success | Failure | Pending
---|---|---|---|---
00_nmatrix_spec | 188 | 139 | 43 | 6
01_enum_spec | 17 | 8 | 9 | 0
02_slice_spec | 144 | 116 | 28 | 0
03_nmatrix_monkeys_spec | 12 | 11 | 1 | 0
elementwise_spec | 38 | 21 | 17 | 0
homogeneous_spec.rb | 7 | 6 | 1 | 0
math_spec | 737 | 541 | 196 | 0
shortcuts_spec | 81 | 57 | 24 | 0
stat_spec | 72 | 40 | 32 | 0
slice_set_spec | 6 | 2 | 4 | 0
A number of the failures are due to the not-yet-implemented floor, ceil, and round methods.

The table below summarizes the spec results for the ported mixed_models gem:

Spec file | Total Tests | Success | Failure | Pending
---|---|---|---|---
Deviance_spec | 4 | 4 | 0 | 0
LMM_spec | 195 | 195 | 0 | 0
LMM_categorical_data_spec.rb | 48 | 45 | 3 | 0
LMMFormula_spec.rb | 5 | 5 | 0 | 0
LMM_interaction_effects_spec.rb | 82 | 82 | 0 | 0
LMM_nested_effects_spec.rb | 40 | 40 | 0 | 0
matrix_methods_spec.rb | 52 | 52 | 0 | 0
ModelSpecification_spec.rb | 7 | 7 | 0 | 0
NelderMeadWithConstraints_spec.rb | 8 | 8 | 0 | 0
NMatrix on JRuby offers comparable speeds to MRI. For specific computations it will be possible to leverage the threading support of JRuby and speed up things using multiple cores.
Adding new functionality to NMatrix-JRuby will be easy from here. Personally, I am interested to add OpenCL support to leverage the GPU computational capacity available on most machines today.
The main goal of this project was to gain from the performance JRuby offers, and to bring a unified interface for linear algebra between MRI and JRuby.
By the end of GSoC, I have been able to successfully create a linear algebra library, NMatrix for JRuby users, which they can easily run on their machines — unless they want to use complex numbers, at least for now.
I have simultaneously ported the mixed_models gem to JRuby. Even here, NMatrix-JRuby comes very close to NMatrix-MRI in performance.
I would like to express my sincere gratitude to my mentor Pjotr Prins for the continuous support through the summer, and for his patience, motivation, enthusiasm, and immense knowledge. I could not have imagined having a better advisor and mentor for this project.
I am very grateful to Google and the Ruby Science Foundation for this golden opportunity.
I am very thankful to Charles Nutter, John Woods, Sameer Deshmukh, Kenta Murata and Alexej Gossmann, who mentored me through the project. It has been a great learning experience.
I thank my fellow GSoC participants Rajith, Lokesh and Gaurav who helped me with certain aspects of my project.
Working Repository : GitHub Repo
List of Commits : List of commits to the above repository
Example iRuby Notebooks : A bunch of iRuby Notebook examples of the Ruby API
Firstly, I must admit that I was unable to meet all the goals of my proposal. In retrospect, perhaps it was a bit too ambitious for my skill level at the time, and I undoubtedly spent a lot of time learning about Ruby–C extensions, the SPICE Toolkit, and good Ruby code and API design.
General software requirements such as a shippable gem with an install script that downloads the correct binary dependencies and a dataset fetcher were among the targets in my proposal that I could not meet. If you’d like to read the full proposal, you can have a look at it here.
With that said, I am happy with what I did complete. Along with most of the functions in the proposal list being ported successfully, I have added 3 Ruby classes to provide a better abstracted experience while using the Ephemerides subsystem of SPICE. Given the vastness of SPICE itself, this seems like meagre coverage, but it is a stepping stone that I (and I hope others) will build upon.
Please follow the blog links below to read the posts concerning the Ruby classes, the posts basically demonstrate the simple API that can be used for various tasks involving ephemerides. It would help if you followed the order of the links below as they build on top of each other.
KernelPool :- A Singleton class to handle the loading and unloading of SPICE Data (Kernels)
Time :- A class that references ephemeris time and has flexible construction functions
Body :- A class that represents a body in space whose motion can be observed with respect to another body.
I spent the majority of the post-midterm period on the latter two classes, while most of the time before that was spent on writing C extensions to port the SPICE API to Ruby (and learning good spec
manners, something that was not quick to grow on me but towards the end got ingrained).
I ran into a lot of bugs, learnt more about compiler flags and other building options, and it gave me surprisingly good exposure to low level language trivia for a high level Ruby project. Honestly, I can type SpiceRub
faster than most words on my keyboard now =)
Things that I’ll tackle (after a short break) include :-
1) Installation Integrity so that gem install
does everything needed for installation.
2) Complete Documentation
3) The test coverage of SpiceRub::Time is currently lacking because I changed the API at the last minute, but as most of these functions wrap around Native functions that have already been tested, this won’t be too high a priority, but it will be done.
4) Kernel Fetcher Script : Having to crawl the web for relevant Kernels was a massive headache during this project, and adding something that directly refers to NAIF’s FTP servers would be a neat addition.
5) Expand the API : There are a lot of SPICE concepts I am not well versed in, and there are a bunch of ported functions that would work very well with a better API. (The most immediate task I can see is making a better API for the Geometry Finder subsystem, of which many functions have already been ported in /ext/spice_rub/spice_geometry.c
)
I’d like to thank my mentors for this project, Dr. John Woods, Shaun Stewart, and Victor Shepelev, for their guidance and knowledge. John and Shaun’s expertise in astro research and Victor’s vast knowledge of professional Ruby code have helped keep this whole ship together, and I’m looking forward to sailing it on a few more journeys.
Also, John made NMatrix which is sort of the backbone for a lot of SpiceRub’s functionality.
Finally, I would like to thank Andrew Annex, who wrote SpicePy, and Philip Rasch, who wrote spiceminer, two Python wrappers for the SPICE Toolkit. SpicePy has high port and test coverage (I lifted a lot of tests from there that weren’t available in the SPICE documentation), and spiceminer had an OOP-style API which I referred to while designing the Body
class.
It’s been an incredible summer, thank you for reading :)
My GSoC project is the Ruby gem mixed_models. Mixed models are statistical models which predict the value of a response variable as a result of fixed and random effects. The gem in its current version can be used to fit statistical linear mixed models and perform statistical inference on the model parameters as well as to predict future observations. A number of tutorials/examples in IRuby notebook format are accessible from the mixed_models
github repository.
Linear mixed models are implemented in the class LMM
. The constructor method LMM#initialize
provides a flexible model specification interface, where an arbitrary covariance structure of the random effects terms can be passed as a Proc
or a block.
A convenient user-friendly interface to the basic model fitting algorithm is LMM#from_formula
, which uses the formula language of the R mixed models package lme4
for model specification. With the #from_formula
method, the user can conveniently fit models with categorical predictor variables, interaction fixed or random effects, as well as multiple crossed or nested random effects, all with just one line of code.
Examples are given in the sections below.
The parameter estimation in LMM#initialize
is largely based on the approach developed by the authors of the R mixed models package lme4
, which is delineated in the lme4
vignette. I have tried to make the code of the model fitting algorithm in LMM#initialize
easy to read, especially compared to the corresponding implementation in lme4
.
The lme4
code is largely written in C++, which is integrated in R via the packages Rcpp
and RcppEigen
. It uses CHOLMOD code for various sparse matrix tricks, and it involves passing pointers to C++ object to R (and vice versa) many times, and passing different R environments from function to function. All this makes the lme4
code rather hard to read. Even Douglas Bates, the main developer of lme4
, admits that “The end result is confusing (my fault entirely) and fragile”, because of all the utilized performance improvements. I have analyzed the lme4
code in three blog posts (part 1, part 2 and part 3) before starting to work on my gem mixed_models
.
The method LMM#initialize
is written in a more functional style, which makes the code shorter and (I find) easier to follow. All matrix calculations are performed using the gem nmatrix
, which has a quite intuitive syntax and contributes to the overall code readability as well.
The Ruby gem loses with respect to memory consumption and speed in comparison to lme4
, because it is written in pure Ruby and does not utilize any sparse matrix tricks. However, for the same reasons the mixed_models
code is much shorter and easier to read than lme4
. Moreover, the linear mixed model formulation in mixed_models
is a little bit more general, because it does not assume that the random effects covariance matrix is sparse. More about the implementation of LMM#initialize
can be found in this blog post.
Popular existing software packages for mixed models include the R package lme4
(which is arguably the standard software for linear mixed models), the R package nlme
(an older package developed by the same author as lme4
, still widely used), Python’s statsmodels
, and the Julia package MixedModels.jl
.
Below, I give a couple of examples illustrating some of the capabilities of mixed_models
and explore how it compares to the alternatives.
As an example, we use data from the UCI machine learning repository, which originate from blog posts from various sources in 2010-2012, in order to model (the logarithm of) the number of comments that a blog post receives. The linear predictors are the text length, the log-transform of the average number of comments at the hosting website, the average number of trackbacks at the hosting website, and the parent blog posts. We assume a random effect on the number of comments due to the day of the week on which the blog post was published. In mixed_models
this model can be fit with
and we can display some information about the estimated fixed effects with
which produces the following output:
We can also display the estimated random effects coefficients and the random effects standard deviation,
which produces
Interestingly, the estimates of the random effects coefficients and standard deviation are all zero! That is, we have a singular fit. Thus, our results imply that the day of the week on which a blog post is published has no effect on the number of comments that the blog post will receive.
It is worth pointing out that such a model fit with a singular covariance matrix is problematic with the current version of Python’s statsmodels
(described as “numerically challenging” in the documentation) and the R package nlme
(“Singular covariance matrices correspond to infinite parameter values”, a mailing list reply by Douglas Bates, the author of nlme
). However, mixed_models
, lme4
and MixedModels.jl
can handle singular fits without problems.
In fact, like mixed_models
above, lme4
estimates the random effects coefficients and standard deviation to be zero, as we can see from the following R output:
Unfortunately, mixed_models
is rather slow when applied to such a large data set (blog_data
is a data frame of size 22435×8), especially when compared to lme4
which uses many sparse matrix tricks and is mostly written in C++ (integrated in R via Rcpp
) to speed up computation. The difference in performance between mixed_models
and lme4
is on the order of hours for large data, and Julia’s MixedModels.jl
promises to be even faster than lme4
. However, there is no noticeable difference in performance speed for smaller data sets.
The full data analysis of the blog post data can be found in this IRuby notebook.
Often, the experimental design or the data suggests a linear mixed model whose random effects are associated with multiple grouping factors. A specification of multiple random effects terms which correspond to multiple grouping factors is often referred to as crossed random effects, or nested random effects if the corresponding grouping factors are nested in each other.
A good reference on such models is Chapter 2 of Douglas Bates’ lme4
book.
Like lme4
, mixed_models
is particularly well suited for models with crossed or nested random effects. The current release of statsmodels
, however, does not support crossed or nested random effects (according to the documentation).
As an example we fit a linear mixed model with nested random effects to a data frame with 100 rows, of the form:
We consider the following model:

- We take y to be the response and x its predictor.
- We take the factor b to be nested within the factor a.
- We assume a random intercept effect due to a; that is, a different (random) intercept term for each level of a.
- We assume a random intercept effect due to b nested in a; that is, a different (random) intercept for each combination of levels of a and b.

That is, mathematically the model can be expressed as
y = beta_0 + beta_1 * x + gamma(a) + delta(a,b) + epsilon
where gamma(a) ~ N(0, phi**2)
and delta(a,b) ~ N(0, psi**2)
are normally distributed random variables which assume different realizations for different values of a
and b
, and where epsilon
is a random Gaussian noise term with variance sigma**2
. The goal is to estimate the parameters beta_0
, beta_1
, phi
, psi
and sigma
.
We fit this model in mixed_models
, and display the estimated random effects correlation structure with
which produces the output
The correlation between the factor a
and the nested random effect a_and_b
is denoted as nil
, because the random effects in the model at hand are assumed to be independent.
An advantage of mixed_models
over some other tools is the simplicity with which p-values and confidence intervals for the parameter estimates can be calculated using a multitude of available methods. Such methods include a likelihood ratio test implementation, multiple bootstrap based methods (which run in parallel by default), and methods based on the Wald Z statistic.
We can compute five types of 95% confidence intervals for the fixed effects coefficients with the following line of code:
which yields the result
For example, we see here that the intercept term is likely not significantly different from zero. We could proceed now by performing hypotheses tests using #fix_ef_p
or #likelihood_ratio_test
, or by refitting a model without an intercept using #drop_fix_ef
.
We can also test the nested random effect for significance, in order to decide whether we should drop that term from the model to reduce model complexity. We can use a bootstrap-based version of the likelihood ratio test as follows.
We get a p-value of 9.99e-4, suggesting that we probably should keep the term (1|a:b)
in the model formula.
Another advantage of mixed_models
against comparable tools is the ease of fitting models with arbitrary covariance structures of the random effects, which are not covered by the formula interface of lme4
. This can be done in a user-friendly manner by providing a block or a Proc
to the LMM
constructor. This unique feature of the Ruby language makes the implementation and usage of the method incredibly convenient. A danger of allowing for arbitrary covariance structures is, of course, that such a flexibility gives the user the freedom to specify degenerate and computationally unstable models.
As an example we look at an application to genetics, namely to SNP data (single-nucleotide polymorphism) with known pedigree structures (family relationships of the subjects). The family information is prior knowledge that we can model in the random effects of a linear mixed effects model.
We model the quantitative trait y
(a vector of length 1200) as
y = X * beta + b + epsilon
where X
is a 1200 x 130
matrix containing the genotypes (i.e. 130 SNPs for each of the 1200 subjects); epsilon
is a vector of independent random noise terms with variances equal to sigma**2
; beta
is a vector of unknown fixed effects coefficients measuring the contribution of each SNP to the quantitative trait y
; and b
is a vector of random effects.
If we denote the kinship matrix by K
, then we can express the probability distribution of b
as b ~ N(0, delta**2 * 2 * K)
, where we multiply K
by 2
because the diagonal of K
is constant 0.5
, and where delta**2
is an unknown scaling factor.
The goal is to estimate the unknown parameters beta
, sigma
, and delta
, and to determine which of the fixed effects coefficients are significantly different from 0 (i.e. which SNPs are possibly causing the variability in the trait y
).
In order to specify the covariance structure of the random effects, we need to pass a block or Proc
that produces the upper triangular Cholesky factor of the covariance matrix of the random effects from an input Array. In this example, that would be the multiplication of the prior known Cholesky factor of the kinship matrix with a scaling factor.
Having all the model matrices and vectors, we compute the Cholesky factor of the kinship matrix and fit the model with
Then we can use the available hypotheses test and confidence interval methods to determine which SNPs are significant predictors of the quantitative trait. Out of the 130 SNPs in the model, we find 24 to be significant as linear predictors.
See this blog post for a full analysis of this data with mixed_models
.
Writing the formula language interpretation code used by LMM#from_formula
from scratch was not easy. Much of the code can be reorganized to be easier to read and to use in other projects. Possibly, the formula interface should be separated out, similar to how it is done with the Python package patsy. Also, some shortcut symbols (namely *
, /
, and ||
) in the model specification formula language are currently not implemented.
I plan to add linear mixed models for high-dimensional data (i.e. more predictors than observations) to mixed_models
, because that work would be in line with my current PhD research.
I plan to add generalized linear mixed models capabilities to mixed_models
, which can be used to fit mixed models to discrete data (such as binary or count data).
I want to thank Google and the Ruby Science Foundation for giving me this excellent opportunity! I especially want to thank Pjotr Prins who was my mentor for the project for much helpful advice and suggestions as well as his prompt responses to any of my concerns. I also want to thank my fellow GSoC participants Will, Ivan, and Sameer for their help with certain aspects of my project.
My Google Summer of Code project was to move parts of the functionality of the
nmatrix
gem to optional plugin gems. NMatrix is a Ruby library for linear algebra,
used by many other projects.
In addition to the code that was part of
NMatrix proper, NMatrix previously required the ATLAS library, which
implemented fast versions of common matrix operations like multiplication
and inversion, as well as more advanced operations like eigenvalue
decomposition and Cholesky decomposition.
There were two separate but related motivations for my project. The first was to simplify the NMatrix installation process. ATLAS can be difficult to install, so the installation process for NMatrix was complicated, especially on OS X, and may have discouraged people from using NMatrix. The second motivation was that by separating out the ATLAS code from the main NMatrix code, it would be easier to add new linear algebra backends which provide similar features. Indeed, I implemented a second backend this summer.
The end result of my summer’s work:

- The core nmatrix gem does not depend on any external linear algebra libraries. It provides non-optimized implementations of common matrix operations.
- The ATLAS-specific code has been moved to the new nmatrix-atlas gem, so that those who are only interested in the core functionality are not required to install ATLAS. nmatrix-atlas provides optimized implementations of common matrix operations, as well as advanced functions not available in nmatrix. I wrote a blog post describing the setup for releasing multiple gems from the same repository, which this required.
- There is a new gem, nmatrix-lapacke, which provides the same features as nmatrix-atlas, but instead of depending specifically on the ATLAS library requires any generic LAPACK and BLAS implementation. This should be easier to use for many users, as they may already have LAPACK installed (it comes pre-installed with OS X and is commonly used on Linux systems), but not ATLAS.
- Installation of the core nmatrix gem is simpler. Compare the new installation instructions to the old ones.

The one deviation from my original proposal was that I originally intended to remove
all the ATLAS code and release only the nmatrix-lapacke
plugin, so that we
would only have one interface to the advanced linear algebra functions, but I
decided to keep the ATLAS code, since the nmatrix-lapacke
code is new and
has not had a chance to be thoroughly tested.
For advanced functions not provided by the core nmatrix
gem, for example
gesvd
, nmatrix-atlas
and nmatrix-lapacke
provide a common interface:
If the developer wants to use an advanced feature, but does not care
whether the user is using nmatrix-atlas
or nmatrix-lapacke
, they can require nmatrix/lapack_plugin
, which will
require whichever of the two is available, instead of being forced to
choose between the two.
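The fallback behaviour behind such a meta-require can be sketched in plain Ruby (illustrative only; the actual nmatrix/lapack_plugin file is simpler and hard-codes the two gem names):

```ruby
# Try each candidate library in turn and return the first that loads.
def load_first_available(*candidates)
  candidates.each do |name|
    begin
      require name
      return name
    rescue LoadError
      next
    end
  end
  raise LoadError, "none of #{candidates.join(', ')} could be loaded"
end

# 'set' (always available in the stdlib) stands in for a backend here.
load_first_available('no_such_backend', 'set')  # => "set"
```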
As a fun test of the new gems, I did a very simple benchmark, just testing how long it took to invert a 1500-by-1500 matrix in place using NMatrix#invert!:
- nmatrix (no external libraries): 3.67s
- nmatrix-atlas: 0.96s
- nmatrix-lapacke with ATLAS: 0.99s
- nmatrix-lapacke with OpenBLAS (multithreading enabled): 0.39s
- nmatrix-lapacke with reference implementations of LAPACK and BLAS: 3.72s

This is not supposed to be a thorough or realistic benchmark (performance will depend on your system, on how you built the libraries, and on the exact functions that you use), but there are still a few interesting conclusions we can draw from it:
Performance is essentially the same with nmatrix-atlas and with nmatrix-lapacke using ATLAS (this means we could consider deprecating the nmatrix-atlas gem).

Overall, my summer has been productive. I implemented everything that I proposed and feedback from testers so far has been positive. I plan to stay involved with NMatrix, especially to follow up on any issues related to my changes. Although I won’t be a student next summer, I would certainly consider participating in Google Summer of Code in the future as a mentor. I’d like to thank my mentor John Woods and the rest of the SciRuby community for support and feedback throughout the summer.
This summer I’ve been participating in Google Summer of Code with the GnuplotRB project (a plotting tool for Ruby users based on Gnuplot) for SciRuby. GSoC is almost over and I’m releasing v0.3.1 of GnuplotRB as a gem. In this blog post I want to introduce the gem and highlight some of its capabilities.
There are several existing plotting tools for Ruby, such as Nyaplot, Plotrb, Rubyvis and the Gnuplot gem. However, they are not designed for large datasets and have fewer plotting styles and options than Gnuplot. The Gnuplot gem was developed long ago, nowadays consists mostly of hacks, and does not support modern Gnuplot features such as multiplot.
Therefore my goal was to develop a new gem for Gnuplot which would allow full use of its features in Ruby. I was inspired to build an easy-to-use interface for the most commonly used features of Gnuplot, and to allow users to customize their plots with Gnuplot options as easily as possible, in a Rubyesque way.
The main feature of every plotting tool is its ability to plot graphs. GnuplotRB allows you
to plot both mathematical formula and (huge) sets of data. GnuplotRB supports plotting
2D graphs (GnuplotRB::Plot
class) in Cartesian/parametric/polar coordinates and 3D
graphs (GnuplotRB::Splot
class) — in Cartesian/cylindrical/spherical coordinates.
There is a vast number of plotting styles supported by GnuplotRB:
- points
- lines
- histograms
- boxerrorbars
- circles
- boxes
- filledcurves
- vectors
- heatmap
Plot examples:
For code examples please see the repository README, notebooks and the examples folder.
GnuplotRB::Multiplot
allows users to place several plots on a single layout and output
them at once (e.g., to a PNG file).
Multiplot notebook.
Here is a multiplot example (taken from Sameer’s notebook):
GnuplotRB may output any plot to a GIF file, but GnuplotRB::Animation allows you to make the GIF animated. It takes several Plot or Splot objects, just as multiplot does, and outputs them one-by-one as the frames of a GIF animation.
Animation notebook.
Although GnuplotRB’s main purpose is to provide a swift, robust and easy-to-use plotting tool, it also offers a Fit module that contains several methods for fitting given data with a function. See examples in the Fit notebook.
GnuplotRB plots may be embedded into iRuby notebooks as JPEG/PNG/SVG images, as ASCII art or as GIF animations (the Animation class). This functionality is explained in a special iRuby notebook.
To link GnuplotRB with other SciRuby tools, I implemented plot creation from data given in Daru containers (Daru::Dataframe and Daru::Vector). One can use the daru gem to work with the statistical SciRuby gems while plotting with GnuplotRB. Notebooks with examples: 1, 2.
You can pass data to the Plot (or Splot or Dataset) constructor in the following forms:
- a string containing a mathematical formula (e.g., 'sin(x)')
- a string naming a file with data (e.g., 'points.data')
- any object responding to #to_gnuplot_points
- an Array
- a Daru::Dataframe
- a Daru::Vector
See examples in notebooks.
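As a sketch of the duck-typed case, any object responding to #to_gnuplot_points can act as a data source. The exact string format GnuplotRB expects is an assumption here (whitespace-separated columns, one point per line, as Gnuplot itself reads); treat the PairSeries class below as a hypothetical illustration, not part of the gem:

```ruby
# A minimal, hypothetical data source: converts [[x, y], ...] pairs
# into whitespace-separated rows of the kind Gnuplot reads.
class PairSeries
  def initialize(pairs)
    @pairs = pairs
  end

  # GnuplotRB duck-types on this method; the "x y" row format
  # below is an assumption for illustration.
  def to_gnuplot_points
    @pairs.map { |x, y| "#{x} #{y}" }.join("\n")
  end
end

series = PairSeries.new([[0, 0], [1, 1], [2, 4]])
puts series.to_gnuplot_points
# 0 0
# 1 1
# 2 4
```

With such an object in hand, it could be handed to the Plot constructor just like an Array or a Daru container.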
My project was to write the Ruby extensions for the library SymEngine and come up with a Ruby-ish interface, after which we can use the features of SymEngine from Ruby.
SymEngine is a library for symbolic computation in C++. You may ask, why SymEngine? There are other CASs that I know of. This question was indeed asked. At the beginning, the idea was to use ruby wrappers for sage (a mathematics software system) which uses Pynac, an interface to GiNaC (another CAS). As it turns out from the benchmarks, SymEngine is much faster than Pynac. What about directly wrapping GiNaC? SymEngine is also a bit faster than GiNaC.
The motivation for SymEngine itself is to develop it once in C++ and then use it from other languages rather than doing the same thing all over again for each language that it is required in. In particular, a long term goal is to make Sage as well as SymPy use it by default, thus unifying the Python CAS communities. The goal of implementing the Ruby wrappers is to provide a CAS for the Ruby community.
There are times when we might need a symbolic computation library. Here is an incomplete list of some of the situations:
With that said, a symbolic manipulation library is indispensable for scientists and students. Ruby has gained a great deal of popularity over the years, and a symbolic manipulation library gem like this project in Ruby might prove to be the foundation for a computer algebra system in Ruby. With many efforts like these, Ruby might become the first choice for academicians given how easy it is to code your logic in Ruby.
To install, please follow the compile instructions given in the README. After you are done, I suggest testing the extensions: run rspec spec on the command line, from the symengine/ruby dir.
The gem is still in alpha release. Please help us out by reporting any issues in the repo issue tracker.
Currently, the following features are available in the gem:
- Construct expressions out of variables (mathematical).
- Simplify the expressions.
- Carry out arithmetic operations like +, -, *, / and ** with the variables and expressions.
- Extract arguments or variables from an expression.
- Differentiate an expression with respect to another.
- Substitute variables with other expressions.
Features that will soon be ported to the SymEngine gem:
- Functions, including trigonometric, hyperbolic and some special functions.
- Matrices, and their operations.
- Basic number-theoretic functions.
I have developed a few IRuby notebooks that demonstrate the use of the new SymEngine module in ruby.
Below is an example taken from the notebooks.
SymEngine is a module in the extensions, and the classes are a part of it. So first you fire up the interpreter or an IRuby notebook and load the file:
```ruby
require 'symengine'
```
Go ahead and try a function:

```ruby
SymEngine.ascii_art   # prints the SymEngine ASCII-art banner
```
or create a variable:

```ruby
basic = SymEngine::Basic.new
```
This shows that we have successfully loaded the module.
Just like there are variables like x, y, and z in a mathematical expression or equation, we have SymEngine::Symbol
in SymEngine to represent them. To use a variable, first we need to make a SymEngine::Symbol
object with the string we are going to represent the variable with:

```ruby
x = SymEngine::Symbol.new("x")
y = SymEngine::Symbol.new("y")
z = SymEngine::Symbol.new("z")
```
Then we can construct expressions out of them:

```ruby
e = (x + y) * z
```
In SymEngine, every object is an instance of Basic or its subclasses. So, even an instance of SymEngine::Symbol
is a Basic object:

```ruby
x.is_a? SymEngine::Basic   #=> true
```
Now that we have an expression, we would like to see its expanded form using #expand:

```ruby
f = e.expand   # x*z + y*z
```
Or check if two expressions are the same:

```ruby
e == (x + y) * z   #=> true
```
But e
and f
are not equal since they are only mathematically equal, not structurally:
```ruby
e == f   #=> false
```
Let us suppose you want to know what variables/symbols your expression has. You can do that with the #free_symbols
method, which returns a set of the symbols that are in the expression:

```ruby
f.free_symbols   # a Set containing x, y and z
```
Let us use the #map method to see the elements of the Set:

```ruby
f.free_symbols.map { |s| s.to_s }   # e.g. ["x", "y", "z"]
```
#args returns the terms of the expression:

```ruby
f.args   # e.g. [x*z, y*z]
```
or, if it is a single term, it breaks it down into its elements:

```ruby
(x * y * z).args   # [x, y, z]
```
You can make objects of class SymEngine::Integer
. It’s like regular Integer
in ruby kernel, except it can do all the operations a Basic
object can, such as arithmetic operations:

```ruby
a = SymEngine::Integer.new(12)
b = SymEngine::Integer.new(5)
a + b   #=> 17
```
Additionally, it supports arbitrarily large numbers:

```ruby
SymEngine::Integer.new(2) ** 100
#=> 1267650600228229401496703205376
```
You can also make objects of class SymEngine::Rational, which is the SymEngine counterpart of Rational in Ruby:

```ruby
c = SymEngine::Rational.new(Rational(2, 3))   # built from a ruby Rational
```
Like any other Basic object, arithmetic operations can be done on this rational type too:

```ruby
x + c   # the expression x + 2/3
```
You need not create an instance of SymEngine::Integer or SymEngine::Rational every time you want to use them in an expression. If you already have a ruby Integer or Rational object, you can use it directly, without creating a new SymEngine object:

```ruby
x + 2
x * Rational(3, 4)
```
As you can see, ruby kernel Integer
s and Rational
s interoperate seamlessly with the SymEngine
objects.
```ruby
(2 + x).class   # a SymEngine class, thanks to coercion
```
In the rest of the post, I would like to summarise my work and what I learned as a participant of Google Summer of Code 2015.
I am a newbie when it comes to Ruby, and it took me a while to set up the gem and configure the files for building the extensions.
I faced a lot of problems in the early stages, when I was trying to build the extensions. Ondrej, my mentor, and Isuru, a fellow GSoC student, helped me a lot. Many C flags were reported as missing: some flags cmake added by default but extconf.rb didn’t, including the one required to build it as a shared library. I am still confused about the details, some of which are explored in greater detail in my personal blog. Finally, the library had to be built as a dynamic one. The problem of missing C flags was resolved later by hooking the build process into cmake rather than mkmf.
Many LoadErrors popped up, but they were eventually solved. Ivan helped a lot in debugging the errors. In the end, it turned out to be a simple case of a file missing from the gemspec, which meant it was not being installed.
One of our aims during development was to get rid of unessential dependencies, i.e. the ones we already had the tools for. For instance, the file extconf.rb, used to generate the Makefile for the extension, was removed, because that could also be done by cmake. Flags were added to cmake for building the Ruby extensions, like -DWITH_RUBY=yes. The Makefile then generates the library symengine.so in the directory lib/symengine. Along with extconf.rb, the file extconf.h was also removed. Along the same lines, the dependency on rake was removed, and with it the Rakefile; any task automation will most probably be done in python. So the job of Rake::ExtensionTask was done by cmake, and Rake::GemPackageTask was replaced by the manual method of gem build symengine.gemspec and gem install symengine-0.0.0.gem.
Not many projects have a travis-ci setup for multiple languages, and not even the tutorials clearly explained setting one up. But I did know about one such project, Shogun, the machine-learning toolbox. I referred to their .travis.yml and set it up. If that had not worked, the plan was to manually install the required version of ruby and then execute the shell commands.
Finally, I was able to successfully build the extensions, link the extensions with the SymEngine library, load the ruby-extension library in the interpreter and successfully instantiate an object of type Basic
.
At this time, the way inheritance works with the Ruby C API (like the sequence of construction and destruction of objects of a class that has a superclass) was confusing for all of us. I designed an experiment to check what was actually happening. That cleared things up and made it easier to wrap things from then on. I also wrapped the Symbol class during this period.
We had to design an ugly function to wrap C++ vectors in C. That led us to redesign the C interface. The new approach avoided the reinterpret casting that was being done earlier. Each data structure had a type that was determined at compile time: for C it was an opaque structure, while for C++ the opaque structure declared in the shared header file was implemented in the source file with C++ data types. This blog post explains it further.
While trying to port the SymEngine classes Integer and Rational, I had to port many methods in Basic first. I also replicated the rake tasks from NMatrix for detecting memory leaks, in the form of bash scripts.
Since all objects in the Ruby C API are of the type CBasic, we needed a function that would give us the type name at runtime, so that each object could be wrapped in ruby as an object of the correct Class. This was achieved with an enum in C++, and the same could be done in C, with all the classes listed manually again. But there was no guarantee this would stay consistent if the features were ever wrapped for another language, and manually adding each new class to every enum list is prone to errors. So, to keep this DRY, we automated it by sharing the list of enums. More details of the implementation can be found here.
To support interoperability with the builtin ruby types, I initially overloaded methods in the builtin classes (an approach that was not continued). Overriding all the existing binary operations of a ruby class to support SymEngine types violated the open/closed principle. There was a better way, 'Class Coercion', which was suggested by Isuru. After that, SymEngine types could seamlessly interoperate with the ruby types.
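The coercion mechanism itself can be sketched in plain Ruby, independent of SymEngine: a class that implements #coerce lets ruby's builtin numerics delegate mixed-type operations to it. The Term class below is hypothetical, purely for illustration:

```ruby
# A toy symbolic term that interoperates with ruby Integers via #coerce.
class Term
  attr_reader :to_s

  def initialize(s)
    @to_s = s
  end

  def +(other)
    # Wrap plain numbers so both operands are Terms.
    other = Term.new(other.to_s) unless other.is_a?(Term)
    Term.new("#{to_s} + #{other.to_s}")
  end

  # Called by ruby when e.g. `1 + term` is evaluated: return the pair
  # [wrapped_left_operand, self] so that Term#+ handles the operation.
  def coerce(numeric)
    [Term.new(numeric.to_s), self]
  end
end

x = Term.new("x")
(x + 1).to_s   #=> "x + 1"
(1 + x).to_s   #=> "1 + x"   (works because Integer#+ calls x.coerce(1))
```

The open/closed principle is preserved: Integer is never reopened, yet mixed expressions in both operand orders work.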
After this, all the arithmetic operations had been successfully ported. Each Basic object can now perform arithmetic operations with other Basic objects (and sometimes even with ruby objects like Integer). The python test file that had all the corresponding test cases was ported to its RSpec counterpart.
Recently I completed porting the substitutions module to the extensions (#subs). This feature adds a lot of convenience, as you can now substitute a SymEngine::Symbol with some other value in an expression and then #expand to get the result.
Currently, I am working on porting the trigonometric functions in SymEngine to the extensions. This first requires wrapping the Function class and then the TrigFunction class in SymEngine.
I also have plans to integrate the ruby bindings for the gmp, mpfr and mpc libraries, which are already available as gems, with the ruby bindings for our library. I have created an issue here. Feel free to drop any suggestions.
There is much scope for improvement in both projects: for SymEngine, supporting more features like polynomials and series expansion in the near future; for the extensions, improving the user interface and exception handling. In short, making the extensions more ruby-ish.
I am grateful to my mentor, Mr. Ondřej Čertík, the Ruby Science Foundation and the SymPy Organisation for the opportunity that they gave me and guiding me through the project, and my team-mates for helping me with the issues. I hope more people will contribute to the project and together we will give a nice symbolic manipulation gem to the Ruby community.
The new features led to the inclusion of daru in many of SciRuby’s gems, which use daru’s data storage, access and indexing features for storing and carrying around data. Statsample, statsample-glm, statsample-timeseries and statsample-bivariate-extensions are all now compatible with daru and use Daru::Vector and Daru::DataFrame as their primary data structures. I also overhauled daru’s plotting functionality, which interfaces with nyaplot for creating interactive plots directly from the data.
Also, new gems developed by other GSOC students, notably Ivan’s GnuplotRB gem and Alexej’s mixed_models gem, now accept data from daru data structures. Do see their repo pages for interesting ways of using daru.
The work on daru is also proving to be quite useful for other people, which led to a talk/presentation at DeccanRubyConf 2015, which is one of the three major Ruby conferences in India. You can see the slides and notebooks presented at the talk here. Given the current interest in data analysis and the need for a viable solution in Ruby, I plan to take daru much further. Keep watching the repo for interesting updates :)
In the rest of this post I’ll elaborate on all the work done this summer.
Daru as a gem before GSOC was not exactly user friendly. There were many cases, particularly the iterators, that required some thinking before anybody could use them. This goes against the design philosophy of daru, and of Ruby in general, where surprising programmers with unintuitive constructs is usually frowned upon by the community. So the first thing I did was overhaul daru’s many iterators, for both Vector and DataFrame.
For example, the #map iterator from Enumerable returns an Array no matter what object you call it on. This was not the case before, where #map would return a Daru::Vector or Daru::DataFrame. This behaviour was changed, and now #map returns an Array. If you want a Vector or a DataFrame of the modified values, you should call #recode on Vector or DataFrame.
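This mirrors how ruby's own Enumerable behaves, which can be seen without daru at all; the Set example below stands in for Daru::Vector purely as an illustration of the convention:

```ruby
require 'set'

s = Set[1, 2, 3]

# Enumerable#map always returns an Array, whatever the receiver is:
doubled = s.map { |e| e * 2 }
doubled.class   #=> Array

# A #recode-style method, by contrast, rebuilds the receiver's own type:
rebuilt = Set.new(s.map { |e| e * 2 })
rebuilt.class   #=> Set
```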
Each of these iterators also accepts an optional argument, :row or :vector, which defines the axis over which iteration is carried out. So now there are #each, #map, #map!, #recode, #recode!, #collect, #collect_matrix, #all?, #any?, #keep_vector_if and #keep_row_if. To iterate over elements along with their respective indexes (or labels), you can likewise use #each_row_with_index, #each_vector_with_index, #map_rows_with_index, #map_vector_with_index, #collect_rows_with_index, #collect_vector_with_index or #each_index. I urge you to go over the docs of each of these methods to utilize the full power of daru.
Apart from the improvements to iterators there was also quite a bit of refactoring involved for many methods (courtesy Alexej). The refactoring of certain core methods has made daru much faster than previous versions.
The next (major) thing to do was making daru compatible with Statsample. This was essential since statsample is a very important tool for statistics in Ruby, and it was using its own Vector and Dataset classes, which weren’t very robust as computation tools and were very difficult to use when it came to cleaning or munging data. So I replaced statsample’s Vector and Dataset classes with Daru::Vector and Daru::DataFrame. It involved a significant amount of work on both statsample and daru: statsample because many constructs had to be changed to make them compatible with daru, and daru because there was a lot of essential functionality in these classes that had to be ported over.
Porting code from statsample to daru improved daru significantly. A whole host of statistics methods from statsample were imported into daru, and you can now use all of them from daru. Statsample also works well with rubyvis, a great tool for visualization; you can now do that with daru as well.
Many new methods for reading and writing data to and from files were also added to daru. You can now read and write data to and from CSV, Excel, plain text files or even SQL databases.
In effect, daru is now completely compatible with Statsample (and all the other Statsample extensions). You can use daru data structures for storing data and pass them to statsample for performing computations. The biggest advantage of this approach is that the analysed data can be passed around to other scientific Ruby libraries (some of which listed above) that use daru as well. Since daru offers in-built functions to better ‘see’ your data, better visualization is possible.
See these blogs and notebooks for a complete overview of daru’s new features.
Also see the notebooks in the statsample README for using daru with statsample.
Most of the time after the mid-term submission was spent implementing the time series functions for daru.
I implemented a new index, the DateTimeIndex, which can be used for indexing data on time stamps. It enables users to query data based on time stamps. Time stamps can either be specified with precise Ruby DateTime objects or as strings, in which case all the data falling under that period is retrieved. For example, specifying ‘2012’ returns all data that falls in the year 2012. See detailed usage of DateTimeIndex and DateTime in conjunction with other daru constructs in the daru README.
An essential utility in implementing DateTimeIndex
was DateOffset
, which is a new set of classes that offsets dates based on certain rules or business logic. It can advance or lag a Ruby DateTime
to the nearest day, or any day of the week, or the end or beginning of the month, etc. DateOffset
is an essential part of DateTimeIndex
and can also be used as a stand-alone utility for advancing/lagging DateTime
objects. This blog post elaborates more on the nuances of DateOffset
and its usage.
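The kind of rule-based date arithmetic DateOffset performs can be sketched with ruby's stdlib Date, without daru; the helper names below are made up for illustration and are not daru's actual API:

```ruby
require 'date'

# An "end of month" offset: advance a date to the last day of its month.
def end_of_month(date)
  Date.new(date.year, date.month, -1)   # day -1 selects the month's last day
end

# A "next weekday" offset: advance to the next given day of the week
# (0 = Sunday .. 6 = Saturday), always moving strictly forward.
def next_weekday(date, wday)
  delta = (wday - date.wday) % 7
  delta = 7 if delta.zero?
  date + delta
end

end_of_month(Date.new(2012, 2, 10))     #=> 2012-02-29 (leap year)
next_weekday(Date.new(2015, 8, 24), 5)  #=> 2015-08-28 (the Friday after that Monday)
```

daru's DateOffset packages rules like these as objects so they can be composed with DateTimeIndex or used standalone.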
The last thing done after the mid term was complete compatibility with Ankur Goel’s statsample-timeseries, which he created during GSOC 2013. Statsample-timeseries is a comprehensive suite offering various functions for statistical analysis of time series data. It now works with daru containers and can be used for statistical analysis of data indexed on Daru::DateTimeIndex. See some use cases in the README.
I’d like to conclude by thanking all the people directly and indirectly involved in making this project a success: my mentor Carlos, for his help and support throughout the summer, and Ivan, Alexej and Will, for their support and feedback at various stages of developing daru. Also, a big thank you to all the SciRuby maintainers for making this happen!