Using sentiment analysis to capture monetary policy decisions

Inside.TechLabs
7 min read · Jul 6, 2019


As avid financial market followers, we are most intrigued by monetary policy announcements, such as the Fed’s “patient approach” or Mario Draghi’s dream of a rate hike before retirement. We have noticed how cumbersome it can be to eyeball statements and deduce sentiment from them by hand, a task that takes several minutes. The most important thing to capture is the rate decision (hike, maintain or cut rates); beyond that, the language and terminology the Fed, ECB and BoE use to convey the economic sentiment of the time also plays a huge role in financial markets.

There have been well documented cases of world-renowned hedge funds using sentiment analysis to capture monetary policy decisions in fractions of a second, so once the idea was discussed with the rest of the team, it was a no-brainer. The goal was to capture rate changes along with sentiment in the least amount of time possible; giving any aspiring trader those extra few seconds to position themselves.

Initially, there were many questions on how to go about this. How are we going to get the data? Which words are we going to look for? Which libraries do we need?

Figure 1 FOMC Press Release

We set out to answer the questions as they came up, i.e. sequentially. In terms of how we got the data, we visited the Federal Reserve website and viewed the archive of historical press releases and statements. We copied these into TXT files and created our own library of 165 historical Fed statements dating back to the year 2000. By collating the data (files were named after the date of the statement), we could build a data frame with the date, the text of the statement and a Boolean variable marking whether it was a conference call. Conference calls tend to represent “unplanned meetings”, usually under extreme circumstances or crises (think of the Lehman Brothers collapse in 2008). These are more spontaneous and count as extra meetings within the normal calendar year.
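The collating step above can be sketched roughly as follows. This is a minimal sketch, not our exact script: the folder name, the `_conf` filename suffix marking conference calls, and the `load_statements` helper are all assumptions made for illustration.

```python
from pathlib import Path

import pandas as pd

# Assumed layout: one TXT file per statement, named by its date,
# e.g. "statements/2008-10-08_conf.txt" (the "_conf" suffix marking
# a conference call is a convention invented for this sketch).
def load_statements(folder="statements"):
    rows = []
    for path in sorted(Path(folder).glob("*.txt")):
        stem = path.stem                      # e.g. "2008-10-08_conf"
        is_conf = stem.endswith("_conf")
        date_str = stem.replace("_conf", "")
        rows.append({
            "date": pd.to_datetime(date_str),
            "text": path.read_text(encoding="utf-8"),
            "conference_call": is_conf,
        })
    return pd.DataFrame(rows)
```

Naming the files after the statement date means the date column comes for free and the frame sorts chronologically without any extra bookkeeping.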

The words question proved somewhat more difficult to answer. We had learnt some Natural Language Processing (using the NLTK library) in our TechLabs learning tracks and wanted to apply some of it to this problem. However, a supervised approach would have required a domain expert to label some statements as “positive” and others as “negative” so that a machine learning model could sort words into positive and negative pools (an idea we have not discarded for the future). We also had to account for financial market subtleties, so we ultimately decided to use a pre-classified financial dictionary, namely Loughran and McDonald (2011). In this dictionary, words are already classified as positive or negative within a financial market context. With a few tweaks it could be applied to monetary policy, so after reading some of the Fed statements we added a handful of words to the dictionary.

Figure 2 Loughran and McDonald (2011) Financial Dictionary

Negation was a characteristic we had to deal with, so we included a “negate” feature that flags such a hit as non-conclusive (e.g. “despite pressures” reads as neutral overall, not negative). Once we had controlled for these variables, we used our function to assess all Fed statements between 2000 and the present day. The function counts the negative words and the positive words in a statement, and the net sentiment score is simply positive words minus negative words.
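The scoring logic can be illustrated with a toy version of the dictionary approach. The word lists below are tiny stand-ins for the Loughran and McDonald lists (which contain thousands of entries), and the negation cues are our own guess at what a “negate” feature might check, so treat this as a sketch of the idea rather than our exact implementation.

```python
import re

# Toy stand-ins for the Loughran-McDonald word lists; the real
# dictionary is far larger. The NEGATORS set is an assumption.
POSITIVE = {"strengthened", "improved", "stable", "gains"}
NEGATIVE = {"weakened", "pressures", "strains", "declined"}
NEGATORS = {"despite", "not", "without"}

def net_sentiment(text):
    """Count positive minus negative words, neutralizing negated hits."""
    tokens = re.findall(r"[a-z]+", text.lower())
    pos = neg = 0
    for i, tok in enumerate(tokens):
        negated = i > 0 and tokens[i - 1] in NEGATORS
        if tok in POSITIVE and not negated:
            pos += 1
        elif tok in NEGATIVE and not negated:
            neg += 1
    return pos - neg
```

With this rule, “despite pressures” contributes nothing to the score, which is exactly the neutral treatment described above.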

We plotted a few graphs of usage and net sentiment:

Figure 3 Count of positive and negative words over time
Figure 4 Net Sentiment over time

We then wanted to assess whether a new statement whose sentiment falls significantly below a trailing two-year average can give us any insight into what the Federal Reserve may or may not do.

Figure 5 16 Statement Moving Average Over Time

The grey blocks represent economic crisis periods (from left to right: Dot-Com Bubble, Financial Crisis, Eurozone Crisis, Chinese Stock Market Crash). We see a meaningful drop in sentiment just prior to economic crises.
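With roughly eight scheduled FOMC meetings a year, a 16-statement window approximates a trailing two-year average. A sketch of that comparison, assuming a data frame with a date index and a `net_sentiment` column (the column and function names are ours):

```python
import pandas as pd

# Assumes one row per statement, indexed by date, with a
# "net_sentiment" column produced by the scoring step.
def add_trailing_average(df, window=16):
    df = df.sort_index()
    df["sentiment_ma"] = df["net_sentiment"].rolling(window).mean()
    # Flag statements whose sentiment sits below their own trailing mean.
    df["below_trend"] = df["net_sentiment"] < df["sentiment_ma"]
    return df
```

`rolling(window).mean()` leaves the first `window - 1` rows as NaN, so the flag only activates once a full two-year history is available.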

Figure 6 FOMC Press Release with Highlighted Funds Target Rate

In addition to analyzing the sentiment of the statements, we wanted to build a tool that quickly and reliably extracts the Fed’s decision on the federal funds target rate. This was also needed to assess whether sentiment provides any insight into whether the Fed will or will not alter rates. The federal funds target rate is the interest rate at which financial institutions lend reserve balances to other depository institutions overnight on an uncollateralized basis. Speed and accuracy when extracting the target rate are of crucial importance because it plays a key role in the pricing of financial instruments with short maturities.

Figure 7 Speed test of the Target Rate extractor

To obtain the federal funds rate, we had to assess the characteristics of how it is reported. To keep the process simple and fast, we decided to extract it based on a set of heuristics. First, the statement being analyzed is tokenized into a list of sentences, using the NLTK library. Second, the list of sentences is looped through, and all sentences not containing the words “federal funds rate” and “target” are dropped. Now that the rate decision sentence has been picked out, the code searches for words that reveal whether the rate was increased, maintained or decreased. After this, the code scans for the word “percent” and picks out the range reported immediately before it. Since the range is reported as a fraction in the format “1–3/4 to 2 percent”, we wrote a function that converts these values into floats. We tested the function on all 165 statements and managed to accurately extract each rate decision in less than a fifth of a second, giving us a meaningful track record on which to trust the function for future use under time pressure.
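The heuristics above can be sketched as follows. In our pipeline the sentence split is done with NLTK’s sentence tokenizer; the sketch below substitutes a crude regex split so it runs standalone, and the direction labels, keyword lists and helper names are assumptions for illustration.

```python
import re
from fractions import Fraction

def _to_float(token):
    """Convert a rate token like '1-3/4' or '2' to a float."""
    whole, _, frac = token.partition("-")
    value = Fraction(whole)
    if frac:
        value += Fraction(frac)   # e.g. 1 + 3/4 -> 1.75
    return float(value)

def extract_target_rate(statement):
    """Return (direction, low, high) from the rate-decision sentence, or None."""
    # Crude sentence split standing in for nltk.sent_tokenize.
    for sent in re.split(r"(?<=[.!?])\s+", statement):
        lower = sent.lower().replace("\u2013", "-")  # en dash -> hyphen
        if "federal funds rate" not in lower or "target" not in lower:
            continue
        if "raise" in lower or "increase" in lower:
            direction = "raise"
        elif "lower" in lower or "decrease" in lower or "reduce" in lower:
            direction = "lower"
        else:
            direction = "maintain"
        # Ranges read like "1-3/4 to 2 percent" immediately before "percent".
        m = re.search(r"([\d/\-]+)\s+to\s+([\d/\-]+)\s+percent", lower)
        if m:
            return direction, _to_float(m.group(1)), _to_float(m.group(2))
    return None
```

Using `Fraction` sidesteps any floating-point parsing of the mixed numbers the statements use, so “1-3/4” converts exactly to 1.75.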

The function is also made to be easily adaptable for other purposes. For our use, it extracts both the direction of the federal funds target rate decision and the target range itself.

Once we had assigned the corresponding rate decisions to statements along with each statement’s sentiment score, we viewed all the information together on one plot. A 16-statement sentiment moving average seems to precede any changes in the federal funds rate. The most recent dip in sentiment, in the first half of 2019, reflects fears surrounding the end of the credit cycle and the lack of growth in European countries; it could perhaps serve as an indicator of an imminent federal funds rate cut.

Figure 8 Net sentiment compared to the Target Rate decisions

We also did some exploratory analysis on the lengths of statements. There is arguably a correlation between who holds the Federal Reserve Chair and the length of Federal Reserve statements over time.

Figure 9 Count of words over time

However, we can also argue (putting our domain expert hats on) that Janet Yellen’s time in the Eccles Building was the most tumultuous in recent history and that the added scrutiny from the media required her to explain more and answer more questions. It was also a period that placed a lot of emphasis on monetary policy, as the US government had no intention of embarking on expansionary fiscal policy. This made the Fed the bearer of all economic news, so perhaps she had to cover all angles.

“Let me just say…it’s not a healthy situation for monetary policy to be the only game in town. I would like to see a situation in which fiscal policy was in a better position to make a contribution on the kinds of occasions and situations we’re talking about…that’s one reason that our current fiscal situation concerns me as much as it does.” — Janet Yellen

In terms of challenges, there were obviously logistical ones: how to meet as a group, time coordination, and so on. But there were also technical ones. A number of hours were spent figuring out how to implement NLTK in our case, and another challenge was capturing statements that use varying terminology (some talk about basis points, others about percentages). In terms of success moments, the most notable have to be finding the financial dictionary and using NLTK for tokenizing words and sentences. This likely isn’t the end of this particular topic, as we could well continue to explore sentiment analysis for monetary policy announcements!

All in all, it was a great project experience and one we learnt a lot from.

Team Members

Arthur Böök: https://www.linkedin.com/in/arthurbook/

Jonathan Jurado: https://www.linkedin.com/in/jonathanjurado/

Daniel Fonrodona Palome: https://www.linkedin.com/in/danielfonrodona/

You can follow us on Twitter too!

GitHub Repository: https://github.com/JonoJurado94/Sentiment-Analysis-for-FOMC-Statements
