This intro class is designed to train students to efficiently collect, manage, explore, analyze, and communicate data in a legal profession that is increasingly driven by data.
Our goal is to equip our students to understand the process of extracting actionable knowledge from data, to distinguish themselves in legal proceedings involving data or analysis, and to assist in firm and in-house management, including billing, case forecasting, process improvement, resource management, and financial operations.
This course assumes prior knowledge of statistics, such as might be obtained in Quantitative Methods for Lawyers or through advanced undergraduate curricula. This class is not for everyone; for many, it will prove to be challenging. With that warning, we encourage you to consider your interest and career aspirations against the unique experience and value of this class. To our knowledge, this is the only existing class that teaches these quantitative skills to lawyers and law students.
Still in beta – we will be adding much more to this site as we move forward!
Here is an introductory slide deck from “Legal Analytics,” a course that Mike Bommarito and I are teaching this semester. Relevant legal applications include predictive coding in e-discovery (i.e., classification), early case assessment and overall case prediction, pricing and staff forecasting, prediction of judicial behavior, etc.
As I have written in my recent article in the Emory Law Journal, we are moving into an era of data-driven law practice. This course is a direct response to demands from relevant industry stakeholders. For a large number of prediction tasks … humans + machines > humans or machines working alone.
We believe this is the first machine learning course ever offered to law students, and it is our goal to help develop the first wave of human capital trained to thrive as this new data-driven era takes hold. Richard Susskind likes to highlight this famous quote from Wayne Gretzky … “A good hockey player plays where the puck is. A great hockey player plays where the puck is going to be.”
While its performance is sometimes problematic for extremely large data problems, R (with the RStudio frontend) is the data science language du jour for many small-to-medium data problems. Among other things, R is great because it is open source and highly customizable, with thousands of packages available to load for a specific problem.
While Python and SQL are also important parts of the overall data science toolkit, we use R as our preferred language in both Quantitative Methods for Lawyers (3 credits) and our Legal Analytics course (2 credits). We have found that diligent students can make amazing strides in a relatively short amount of time. For example, see this final project by Pat Ellis from last year’s course.
Here are some introductory resources that we have developed to get folks started:
Loading R and RStudio
R Boot Camp – Part 1 – Loading Datasets and Basic Data Exploration
Data Cleaning and Additional Resources
R Boot Camp – Part 2 – Statistical Tests Using R
Basic Data Visualization in R
Scatter Plots, Covariance, Correlation Using R
Intro to Regression Analysis Using R
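As a taste of what the boot camp sequence covers, here is a minimal sketch in base R — using the built-in mtcars dataset purely for illustration, not any of the course materials — that loads a dataset, explores it, examines a scatter plot with covariance and correlation, and fits a simple regression:

```r
# Load a built-in dataset (in practice you would use read.csv() on your own file)
data(mtcars)

# Basic data exploration
head(mtcars)        # first six rows
summary(mtcars)     # summary statistics for each column

# Scatter plot: vehicle weight vs. fuel efficiency
plot(mtcars$wt, mtcars$mpg,
     xlab = "Weight (1000 lbs)", ylab = "Miles per Gallon",
     main = "Weight vs. Fuel Efficiency")

# Covariance and correlation of the two variables
cov(mtcars$wt, mtcars$mpg)
cor(mtcars$wt, mtcars$mpg)   # about -0.87: heavier cars get fewer mpg

# Simple linear regression: mpg as a function of weight
model <- lm(mpg ~ wt, data = mtcars)
summary(model)
```

Each line above maps onto one of the boot camp topics listed; the real sessions go into considerably more depth.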
Over the balance of the 2014-2015 academic year, Mike and I will be introducing a variety of additions to the quantitative sequence, including dplyr … more to come …
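To give a flavor of the style of data manipulation that dplyr enables (again using the built-in mtcars dataset purely as a stand-in for real legal data), a quick sketch:

```r
library(dplyr)

data(mtcars)

# Group cars by cylinder count and summarize fuel efficiency,
# chaining each step with the %>% pipe
by_cyl <- mtcars %>%
  group_by(cyl) %>%
  summarise(
    n       = n(),
    avg_mpg = mean(mpg)
  ) %>%
  arrange(desc(avg_mpg))

by_cyl
```

The same grouped summary is possible in base R, but the pipeline style reads top-to-bottom as a sequence of verbs, which is part of why we are adding it to the sequence.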
Tomorrow I will be presenting initial results from my new project, ‘Law on the Market’ (co-authored with Jim Chen, Michael Bommarito & Tyler Soellinger), at the Oxford FRAP Finance Conference at Oriel College!
Abstract: “Building upon developments in theoretical and applied machine learning, as well as the efforts of various scholars including Guimera and Sales-Pardo (2011), Ruger et al. (2004), and Martin et al. (2004), we construct a model designed to predict the voting behavior of the Supreme Court of the United States. Using the extremely randomized tree method first proposed in Geurts, et al. (2006), a method similar to the random forest approach developed in Breiman (2001), as well as novel feature engineering, we predict more than sixty years of decisions by the Supreme Court of the United States (1953-2013). Using only data available prior to the date of decision, our model correctly identifies 69.7% of the Court’s overall affirm and reverse decisions and correctly forecasts 70.9% of the votes of individual justices across 7,700 cases and more than 68,000 justice votes. Our performance is consistent with the general level of prediction offered by prior scholars. However, our model is distinctive as it is the first robust, generalized, and fully predictive model of Supreme Court voting behavior offered to date. Our model predicts six decades of behavior of thirty Justices appointed by thirteen Presidents. With a more sound methodological foundation, our results represent a major advance for the science of quantitative legal prediction and portend a range of other potential applications, such as those described in Katz (2013).”
You can access the current draft of the paper via SSRN or via the physics arXiv. Full code is publicly available on GitHub. See also the LexPredict site. More on this to come soon …
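For readers curious about the method itself: the core idea behind extremely randomized trees (Geurts et al. 2006) is that, unlike in a classic random forest, both the split feature and the cut-point are chosen at random, and many such trees are averaged. The base-R toy sketch below illustrates that idea with depth-1 trees ("stumps") on the built-in iris data — a deliberately simplified illustration, not the paper's model or features (the actual code is on GitHub):

```r
set.seed(42)
data(iris)

# Binary toy task: setosa vs. the rest
y <- as.integer(iris$Species == "setosa")
X <- iris[, 1:4]

# Grow an ensemble of fully randomized stumps:
# random feature, random cut-point, leaf predicts class probability
n_trees <- 100
stumps <- lapply(seq_len(n_trees), function(i) {
  f   <- sample(names(X), 1)                  # random split feature
  cut <- runif(1, min(X[[f]]), max(X[[f]]))   # random cut-point
  left  <- y[X[[f]] <  cut]
  right <- y[X[[f]] >= cut]
  list(feature = f, cut = cut,
       p_left  = if (length(left))  mean(left)  else mean(y),
       p_right = if (length(right)) mean(right) else mean(y))
})

# Average the leaf probabilities across all trees, then threshold
predict_forest <- function(stumps, newdata) {
  probs <- sapply(stumps, function(s) {
    ifelse(newdata[[s$feature]] < s$cut, s$p_left, s$p_right)
  })
  as.integer(rowMeans(probs) >= 0.5)
}

preds <- predict_forest(stumps, X)
```

Real extra-trees grow the trees to full depth and draw several candidate random splits per node; the randomized-split-plus-averaging structure above is the distinctive ingredient.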