Billow
The quickest way to find the price of your dream house.
This project was part of "the Digital Shaper Program" hosted by Techlabs Copenhagen in the autumn semester of 2020/2021.
Abstract
The housing market is heated in many cities, and it is often hard for new home buyers to gain an overview of housing prices in a particular area quickly and without considerable inconvenience. The purpose of this project is to build an MVP that addresses this issue through the development of a website and prediction models.
In order to offer home buyers an accurate prediction of housing prices, we built an interactive website, using various web development languages and frameworks, where people can input the number of bedrooms, the number of bathrooms, the size of the living area, and the zip code to find the corresponding house price. We built a prediction model in Python by testing different models and choosing the one with the highest accuracy. Due to limitations of the dataset, time constraints, and technological issues that arose during the project, there is room for further improvement in both the accuracy of the prediction models and the interaction between the backend and the frontend.
In conclusion, we built a functioning and interactive website and selected the best prediction model with an accuracy of 62%.
Introduction
Being a first-time homeowner or buying property in a new area is a life milestone. Due to a lack of information, new buyers seldom have a good overview of what they can get for their money. This means they may have to spend months visiting homes and comparing prices, or resort to property agents who charge high fees. We believe that finding a dream home within budget should not be that difficult. Therefore, we created Billow.
Billow is a web application that accurately predicts home prices based on historical market data and a pricing model. Based on user-selected parameters, Billow advises the home buyer on a realistic budget for their dream home in that particular area. Billow also offers an overview of available homes within that price range. Billow is a tool that democratises the housing market, offering home buyers transparency, accuracy, and ultimately control over their own budget.
Methodology
Data Science
Getting Started
During the research phase, we started collecting data about real estate and looked at existing real estate websites. Through this research, we became more aware of which parameters are important to, and valued by, users.
Data Set
Searching for data sets
In our search for suitable data sets, our main focus lay on parameters such as price, size in square metres, location, and number of rooms. Most of the data sets we encountered did not cover all of these parameters and were therefore of no value for our project. After extending the search, we found three useful data sets, from France (Demandes de Valeurs Foncières (DVF) — Data.Gouv.Fr, 2020), Germany (Regionaldatenbank Deutschland, n.d.), and Boston, USA. As our working language is English, we decided to use the Boston data set, which we gathered from the Realtor.com website (Realtor.com Real Estate Data and Market Trends for Download, n.d.).
Exploratory Data Analysis
After downloading the data set, we conducted exploratory data analysis (EDA) to get a better overview of the obtained data. The EDA helped us not only to collect valuable information about the size, completeness, and parameters of the data set, but also to gain a better understanding of the correlations between parameters. We were then able to identify the parameters deemed most relevant for home prices. Using the describe(), info(), and head() functions, we found that our data set consisted of 566 entries and 13 parameters, represented as integers, floats, and booleans.
Next, we converted all entries to a common type so we could work with them and adjust them for our housing price model. With the corr() function we examined the correlations between the different data points, and we then chose which data points to keep based on how useful they are from the user's perspective as well as their influence on the price. Additionally, we created several graphs and a heatmap to visually analyse and present the data set. Visualising the data set helped identify its structure and outliers, which gave a more concrete understanding of the correlations between parameters. Exploratory data analysis is a great way to get to know a data set and an essential step before data cleaning.
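A minimal sketch of this EDA step, assuming the Boston data set has been saved locally as boston_housing.csv (the file name is an assumption, not the actual project file):

```python
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Load the Boston data set (file name is an assumption).
df = pd.read_csv("boston_housing.csv")

# First look at size, data types and completeness.
print(df.head())
print(df.info())
print(df.describe())

# Correlation matrix of the numeric columns, visualised as a heatmap.
corr = df.corr(numeric_only=True)
sns.heatmap(corr, annot=True, cmap="coolwarm")
plt.title("Correlation between parameters")
plt.show()
```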
Data cleaning
In order to build an accurate and working price prediction model, we had to clean the data set properly beforehand. It is essential that the data set is uniform and contains no missing or false information. There were missing data points, which we decided to fill in with the mean of that parameter across all entries. We also discussed deleting incomplete entries, but as the data set consisted of only 566 entries, we did not want to shrink it further. Furthermore, certain parameters such as specific area, master bedroom length and width, and kitchen length and width were dropped, as we deemed them unimportant from a user perspective and they had a low correlation with the final price of the house. We decided not to use PCA to determine which parameters to drop, because PCA is mostly useful for very large data sets with many parameters, where dimensions can be dropped while still retaining most of the information and variance. We concluded that our data set had too few entries and parameters to make effective use of PCA.
Lastly, to improve accuracy, we decided to drop the outliers that could skew the data set. Even though this is frowned upon from a statistical perspective, we dropped the outliers because of the large drop in model accuracy they caused, which could be interpreted as a sign of underfitting. We decided to drop the entries with a sold price above $3,500,000 because, as can be seen in the figure below, those were the values that deviated the most from the rest of the data set.
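A minimal sketch of the cleaning steps described above; the column names (e.g. "sold.price", "master.bedroom.length") are assumptions based on the parameters mentioned in the text:

```python
import pandas as pd

df = pd.read_csv("boston_housing.csv")

# Fill missing numeric values with the column mean instead of deleting rows.
numeric_cols = df.select_dtypes(include="number").columns
df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].mean())

# Drop parameters that are unimportant from a user perspective
# and weakly correlated with the price (column names are assumptions).
df = df.drop(columns=["specific.area",
                      "master.bedroom.length", "master.bedroom.width",
                      "kitchen.length", "kitchen.width"],
             errors="ignore")

# Remove the extreme outliers with a sold price above $3,500,000.
df = df[df["sold.price"] <= 3_500_000]
```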
As seen in the figure on the right, there is a clearer correlation between the different data points as well as fewer outliers. The data set is now clean and ready to be used to build the price prediction model.
Building the price prediction model
The data set varies significantly in scale across columns such as zip code and living area. This variation could make it harder for the prediction model to find and define weights and could lead to suboptimal models.
To address that problem, we decided to normalise the data set, which rescales each feature to the range [0, 1]. We used MinMaxScaler instead of StandardScaler, which centres each column at a mean of 0 with a standard deviation of 1, because MinMaxScaler preserves the shape of the distribution (Hale, 2019). To implement the scaler, we first created the scaler object and then fitted it to X and transformed X.
To maintain a realistic output of house prices, we decided to scale only the inputs (X) and leave the output (y) untouched. To make the data easier to work with, we renamed the scaled columns back to their original names, such as 'zip.code' and 'num.beds'.
Next, we needed to build a model. As there are many different ways to predict prices, we decided to build three different models, compare them, and choose the one with the best accuracy. Before building the models, we had to make sure the data was ready: we defined which variable is the output and which variables are the inputs, calling them y and X respectively.
We then split the data into training and test sets with a 70/30 split (70% training, 30% test) and a random_state of 42. To keep the comparison fair and the testing environment consistent, we used the same random_state of 42 in all three models.
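A minimal sketch of the preprocessing described above (defining X and y, scaling only the inputs with MinMaxScaler, and the 70/30 train/test split); the target column name "sold.price" is an assumption:

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

# Continuing from the cleaned DataFrame `df` above.
# The output (y) is the sold price; all remaining columns are inputs (X).
y = df["sold.price"]
X = df.drop(columns=["sold.price"])

# Rescale only the inputs to the range [0, 1]; the price itself stays untouched.
scaler = MinMaxScaler()
X_scaled = pd.DataFrame(scaler.fit_transform(X), columns=X.columns)

# 70/30 split with a fixed random_state so all three models see the same data.
X_train, X_test, y_train, y_test = train_test_split(
    X_scaled, y, test_size=0.3, random_state=42)
```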
The first model we built was a linear regression model, as we wanted to predict the price based on the other selected variables. Since we have more than one explanatory variable, we used multiple linear regression. We chose multiple linear regression over logistic regression because our target, the price, is a continuous value, whereas logistic regression is designed for classification. To apply linear regression to the data set, we first created the LinearRegression model and then fitted it to the training data.
To test the accuracy of the model, we first found the R², which measures how close the data are to the fitted regression line.
As shown above, our R² = 0.337, which is quite low: the model explains only 33.7% of the variability of the response data around its mean. As we only evaluated on the test data, this alone is not a clear indication of how accurate the model is. To assess that, we used cross-validation scores, which estimate how accurate the model will be in practice. Cross-validation splits the data into complementary subsets, fits the model on one subset, and validates it on the other; to reduce variability, this process is repeated several times and the scores are averaged.
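A minimal sketch of this step, using scikit-learn's LinearRegression, its R² score, and cross_val_score (the number of folds is an assumption):

```python
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Fit a multiple linear regression on the training set.
linreg = LinearRegression()
linreg.fit(X_train, y_train)

# R² on the test set (reported as 0.337 in the text).
r2 = linreg.score(X_test, y_test)
print(f"R^2 on the test set: {r2:.3f}")

# Cross-validation score as an estimate of real-world accuracy.
cv_score = cross_val_score(linreg, X_scaled, y, cv=5).mean()
print(f"Mean CV score: {cv_score:.3f}")
```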
As shown above, our CV score ended up at 0.473, meaning the model has an accuracy of 47.3%. This is still inadequate for a good model. The other two models we decided to test were GradientBoostingRegressor and MLPRegressor (multi-layer perceptron regressor). We chose GradientBoostingRegressor because of its ability to create powerful models from ensembles: it builds a model of regression trees and then builds further models that focus on the places where the previous model performs poorly, based on the assumption that the models working together are better than any one model alone (Hoare, 2020). This prediction model then went through the same splitting and testing as the linear regression model. After fine-tuning the model and its parameters for optimal accuracy, we reached an accuracy of 62%.
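A minimal sketch of the gradient boosting step; the hyperparameter values shown are assumptions, not the exact values we tuned:

```python
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Gradient boosting builds regression trees sequentially, each new tree
# correcting the errors of the ensemble built so far.
gbr = GradientBoostingRegressor(
    n_estimators=200,      # assumed value
    learning_rate=0.05,    # assumed value
    max_depth=3,           # assumed value
    random_state=42)
gbr.fit(X_train, y_train)

print(f"Mean CV score: {cross_val_score(gbr, X_scaled, y, cv=5).mean():.3f}")
```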
The last model we tested was MLPRegressor, chosen because it is a neural network (Scikit-learn, n.d.). To implement the regressor, we set up the samples and features to correspond to the rows and columns of the dataframe.
After fine-tuning the model, we fitted the regressor to the training set. We then again made use of the CV score. Below are the corresponding CV scores of the three models for comparison:
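A minimal sketch of the MLPRegressor and the CV comparison, reusing the models fitted in the previous sketches; the network architecture and iteration count are assumptions:

```python
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

# A small multi-layer perceptron; layer sizes and max_iter are assumptions.
mlp = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=42)
mlp.fit(X_train, y_train)

# Compare the three models on the same cross-validation folds.
for name, model in [("Linear regression", linreg),
                    ("Gradient boosting", gbr),
                    ("MLP regressor", mlp)]:
    score = cross_val_score(model, X_scaled, y, cv=5).mean()
    print(f"{name}: mean CV score = {score:.3f}")
```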
After building the models and finding their CV scores, we can see that the GradientBoostingRegressor is the most accurate by a large margin. To see how well the models perform in practice, we took a known input from our data set and compared each model's predicted house price with the actual house price, together with the MAE (mean absolute error) of all three models.
As shown above, the GradientBoostingRegressor has both the highest CV score and the lowest MAE and is therefore the best prediction model.
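A sketch of that comparison, computing the MAE on the test set and the prediction for one known entry (the choice of entry is illustrative):

```python
from sklearn.metrics import mean_absolute_error

# One known entry from the data set to sanity-check the predictions.
sample = X_test.iloc[[0]]
actual_price = y_test.iloc[0]

for name, model in [("Linear regression", linreg),
                    ("Gradient boosting", gbr),
                    ("MLP regressor", mlp)]:
    mae = mean_absolute_error(y_test, model.predict(X_test))
    predicted = model.predict(sample)[0]
    print(f"{name}: MAE = {mae:,.0f}, "
          f"predicted = {predicted:,.0f}, actual = {actual_price:,.0f}")
```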
Web development
Frontend
Start with the user
In our project we did not have a UX designer, so this responsibility fell on the frontend developer. Starting with the user, who experiences the problem of a lack of housing market information, helped inform decisions on both the relevant inputs and outputs of Billow. Through a brainstorming session, we defined the target user as a 33-year-old, first-time home buyer with higher education who already lives in Boston. We arrived at this by looking at the problem we were trying to solve and who we were trying to solve it for. Based on data, we see the problem as most prevalent among inexperienced first-time home buyers, who in Boston are around 33 years of age (Zillow, 2015). We also assume that a buyer with higher education is more inclined to do research before buying a home, and that their income level is more representative of this geographical area. With this persona defined, we tried to understand which features are important and how best to create a meaningful experience for them.
After a discussion with the data science team, we decided on square feet, zip code, bedrooms, and bathrooms as the most meaningful variables from a user's perspective. We then decided to offer the following features in the minimum viable product: estimated price, price range, an overview of house prices in the zip code, and the kind of housing the user could expect to get in that price range. We designed a visual mock-up of the frontend in Figma, one of the leading design tools, and started coding the actual frontend afterwards.
Programming languages
When it comes to frontend programming, there is a limited set of programming languages that is usually used. For this project we used HTML, CSS and JavaScript. HTML is the skeleton of the page. It holds the information and makes the content readable for browsers. CSS, or cascading style sheets, makes it possible to style the HTML. This makes the frontend visually appealing and improves the UX. JavaScript adds functionality to the site and makes it possible to develop meaningful and useful web applications.
CSS
When using CSS, one can use frameworks that offer a set of predefined classes and systems to easily make a web application responsive and consistent. For this project we used Bootstrap, chosen as our primary framework because it offers a mobile-first approach and pre-written classes for quick development, and because it is one of the most widely adopted CSS frontend frameworks. We used its grid system to make the layout of our web application responsive (Bootstrap, n.d.).
To get the result shown above, we used the grid structure shown below, with two columns that each take up half of the available width in the row.
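A minimal sketch of that grid structure, assuming Bootstrap's CSS is already loaded; the class names come from Bootstrap's grid system, while the content placeholders are illustrative:

```html
<div class="container">
  <div class="row">
    <!-- Two columns, each spanning 6 of the 12 grid units (half the row). -->
    <div class="col-md-6">Search form</div>
    <div class="col-md-6">Results chart</div>
  </div>
</div>
```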
JavaScript
JavaScript can be used as vanilla JavaScript, which is the raw, foundational JavaScript language (TIM, 2020). In many cases it is helpful to use a JavaScript framework such as React, Vue, or Angular. Given the scope of the project, we decided to use vanilla JavaScript and added Chart.js, a library for faster development of the charts used to showcase the data (Chart.js, n.d.). A JavaScript library is a collection of pre-written JavaScript that allows for faster development of web applications. We used JavaScript to build the search functionality of the site, show data based on the search, and personalise the results. To give an idea of how we used JavaScript to build the search functionality, we display the code below.
The first code snippet is the form that the user fills out in order to make a search. The form has an id attribute and uses the GET method. When a user types in these values, they are stored as constants in JavaScript.
Below you can see how we store the parameters as constants. We chose constants because the values should not change dynamically while the results are shown; they are only set again when a new search is submitted and the script runs once more.
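A hedged sketch of the form and the constants described above; the element ids, field names, and the use of URLSearchParams to read the submitted values are assumptions, not the exact code of the project:

```html
<!-- Search form: submits its fields as query parameters via GET. -->
<form id="search-form" method="GET" action="results.html">
  <input type="number" name="bedrooms" placeholder="Bedrooms" required>
  <input type="number" name="bathrooms" placeholder="Bathrooms" required>
  <input type="number" name="sqft" placeholder="Square feet" required>
  <input type="text" name="zipcode" placeholder="Zip code" required>
  <button type="submit">Search</button>
</form>

<script>
  // On the results page, read the submitted values from the query string
  // and store them as constants for the rest of the script.
  const params = new URLSearchParams(window.location.search);
  const bedrooms = Number(params.get("bedrooms"));
  const bathrooms = Number(params.get("bathrooms"));
  const livingArea = Number(params.get("sqft"));
  const zipCode = params.get("zipcode");
</script>
```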
When these constants are available, we can use them throughout the programme to filter the data and display the relevant information to the user.
Backend
Requirements and scope
The primary goal was to have a presentable, running prototype. As the web developer, the primary task was to make the data frame interact with the frontend and to display the essential features of our prototype there. To meet the requirements of our MVP, we defined a set of essential tasks that should be carried out in the frontend using data retrieved from a JSON file.
When the user enters a zip code and the square footage of the living area in the search functionality of our web application, we want to display the highest, lowest, and average value of the properties in that zip code, as well as a predicted value based on the different variables entered by the user.
Options
Multiple backend frameworks come to mind when considering how to connect the frontend with the data frame and carry out the essential tasks within the scope and requirements of this project. Backend web development is done on the server side and typically involves a database. A "traditional" backend consists of three major components: a server-side language and framework, a database, and a server. In our research we looked at options such as Flask (a Python-based web framework), Node.js (a JavaScript runtime for server-side programming), MongoDB (a database), and a plain JSON file (a data interchange format).
For the scope of our project, and for the sake of delivering a running prototype, a "traditional" backend was not necessary, because we wanted a simple mock-up intended for presentation purposes only. Hence, we decided to use local storage in the browser to temporarily store the data as a valid JSON file for a quick presentation. We run it locally on our computers and access it through a file, as our project only needs to run on a client's machine. Creating a server platform (web server and database) or an API was therefore not necessary for the scope of our project. Furthermore, the data we use is relatively small, whereas databases are usually used to store very large amounts of data. In short, we are building an application that does not need client/server communication or a backend database.
Considering the scope and the requirements of our project mentioned above, we proceeded to convert the data and create a JavaScript file to carry out the necessary tasks for the prototype.
Converting Python data to a JSON file format
In order to connect the data frame with the frontend, we first needed to transform the Python data frame into JSON. JSON stands for JavaScript Object Notation and is a lightweight interchange format, commonly used to store data because it is easy for humans to read and write and easy for machines to parse and generate. JSON maps directly onto JavaScript objects, which makes the data file straightforward to read from JavaScript: once the JSON is parsed into a JS object, individual pieces of the data can be accessed easily.
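A minimal sketch of this conversion in Python, assuming the cleaned data frame from the data science part; the file names and the record orientation are assumptions:

```python
import pandas as pd

df = pd.read_csv("boston_housing_clean.csv")

# Export the data frame as a JSON array of records, one object per property,
# so the frontend can load it directly as a list of JS objects.
df.to_json("properties.json", orient="records")
```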
Creating a JS script that carries out all the necessary tasks for the prototype
Before the search functionality in the frontend can interact with the data frame, we first had to assign parameters to the corresponding properties of the data set. We defined these variables using const.
For loop + if statement
We used an if statement inside a for loop: the loop iterates over all entries in the data set, and whenever an entry's zip code matches the zip code requested by the user, the data of that property is pushed into the results. Inside the for loop, we declared the loop variable, property, using const.
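A hedged sketch of that loop, assuming the JSON has been loaded into an array called properties with fields such as zip_code and price (the array and field names are assumptions):

```javascript
// Collect every property whose zip code matches the user's search.
const matchingProperties = [];

for (const property of properties) {
  if (property.zip_code === zipCode) {
    matchingProperties.push(property);
  }
}

// Keep only the prices for the min/max/average calculations below.
const propertyPrices = matchingProperties.map((property) => property.price);
```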
Math function request
We declared the variables using let and then used Math.max to return the highest value and Math.min to return the lowest value from the list of property prices (propertyPrices) within the specified zip code. Passing the propertyPrices array object directly to these functions would not work, so we added the spread syntax, a three-dot notation (...), which expands the array into individual values and lets us pass any number of arguments.
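A sketch of those calculations; the average is included as well, since the prototype also displays it:

```javascript
// Highest and lowest sold price in the requested zip code.
let highestPrice = Math.max(...propertyPrices);
let lowestPrice = Math.min(...propertyPrices);

// Average price, shown alongside the highest and lowest values.
let averagePrice =
  propertyPrices.reduce((sum, price) => sum + price, 0) / propertyPrices.length;
```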
Carrying out the property price prediction functionality
In order to carry out the price prediction based on the "livingarea" parameter, we declare a variable "predictionValue" using let to hold the predicted price. The prediction is simply the living area in square feet multiplied by the median price per square foot in Boston (Boston Tops This List For The Highest Price Per Square Foot, 2021).
Note that this computed prediction value is a simple alternative that makes the MVP work; it is not the approach we intend to use in the future.
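A sketch of that calculation; the median price per square foot shown here is a placeholder, not the exact figure used in the prototype:

```javascript
// Median price per square foot in Boston (placeholder value).
const medianPricePerSqFt = 800;

// Simple prediction: living area (sq ft) times the median price per sq ft.
let predictionValue = livingArea * medianPricePerSqFt;
```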
Project results
Solution
The goal for our project was to develop a meaningful web application that solves a real issue for the user and, in the process, to learn to ideate, plan, develop, and iterate on our solution.
The final solution is a prediction algorithm with 62% accuracy that can predict the price of a house based on the number of bedrooms, the number of bathrooms, the size of the living area, and the zip code. The final solution is designed for both mobile and desktop, because approximately half of all web traffic comes from mobile devices (Statista, 2020).
The first page the user sees is the search page. This is where they can make their search based on the criteria they decide are important for their future home. The final result of the search page is a simple, responsive website with the menu at the top.
When the user submits their search criteria, they are taken to the results page. This is where they can see the predicted price, more information about the price range, and the type of housing they can expect in the area. If one were to make a search with 2 bedrooms, 1,900 square feet, zip code 2108, and 1 bathroom, this result would be displayed to the user:
The user is first presented with the predicted price based on their search criteria. The price prediction is not based on the actual prediction algorithm as we did not have the necessary time to create a backend that was able to do it dynamically. Instead, it shows customised results based on the search and also what the user can expect in the price range based on the criteria.
Business results
In order for Billow to become a profitable project with the potential to grow, it will need a proper business model, scaling options, and a sustainable competitive advantage.
Business Model
For the start of the project, the best and easiest way to commercialise Billow would probably be advertising. Advertising is an easy and uncomplicated approach that also works with rather low, regional website traffic, as in our case Boston. Additionally, it would be essential for Billow to leverage as much user data as possible, while adhering to local law and regulations, in order to gather information and create a large data set. This data set can be used not only to further optimise the website, the user experience, and personalised/targeted ads, but can also be a commodity in itself. In later stages, when Billow reaches a critical amount of website traffic, further possibilities to commercialise and scale Billow's business model would be to offer an affiliate model to real estate agents in the area. Billow could also be converted into a marketplace where users can upload their own property and have it reviewed for sale by Billow. As a successful marketplace needs to attract both buyers and sellers, it could be a good idea to first implement a model where potential sellers can enter the data of their existing houses and receive an estimated value of the house they own.
Scaling Billow
The new business model could fund further scaling of Billow to cover other cities and countries. As Billow's starting point is Boston, the next logical step would be to include all of Massachusetts, the state Boston is located in. After that, larger metropolitan areas and their corresponding states, such as New York and Washington, D.C., should be added. An essential requirement for this kind of scaling is access to real estate data in those areas, which can be obtained by buying data from, or cooperating with, data providers or larger real estate companies active in those areas. In the long run, the goal will of course be to collect enough data of our own to create these data sets ourselves.
To best leverage data and create a dynamic web application, we should focus on building a resilient backend system consisting of a database, a web server, and an API. This would provide a stronger foundation for scaling and building a competitive advantage, and the ability to handle large amounts of data and HTTP requests once we have gathered enough data and attracted a large number of users. On the frontend, the focus should be on a better code structure and on adopting the JavaScript framework React.
Competitive Advantage
Lastly, in order for Billow to fully establish itself as a player in the market, it needs sustainable competitive advantages. Here our main focus is the web interface, which promotes user-friendly, easy, uncomplicated, and fast usage. Furthermore, the creation of a proprietary data set, as well as a recommendation engine based on additional data inputs, could give Billow the edge needed to compete in an otherwise saturated market.
Limitations & Challenges
Data Science
One limitation we encountered in the analysis was the relatively small data set. It could be argued that this leads to less precise predictions, as we do not have enough samples and variation, which can cause large fluctuations. We do, however, still consider our findings to be of empirical value, as 560 entries is sufficient and above the statistical minimum of 100 (Brack & Bullen, n.d.). Furthermore, some of the data points were booleans and therefore not relevant for our models, which reduced our options for finding parameters with a high correlation to the housing price.
Another challenge was determining which models to use for our project. We solved this by finding inspiration beyond our curriculum and looking at different data science knowledge bases and communities, where we saw what others had used; we then decided on our models by looking at our data set and the theory behind each model. The last challenge we faced was that our workflow for collaborating in Jupyter Notebook was not optimal, which meant that for a large period of time only one data scientist could work on the code. We solved that challenge by talking with our mentor and troubleshooting on the internet.
Web development
The solution was based on vanilla JavaScript, which is not ideal for scaling the service and adding more functionality; a framework such as React would create better conditions for scaling the solution. The biggest challenge from a web development perspective was the initial plan of building a backend and connecting it to the frontend. At the beginning of the project phase, when we tried to define the scope of our project and discussed what we wanted to achieve, we were very ambitious about backend web development, even though at the time we had little knowledge of what a backend actually was and underestimated its complexity. The Web Developer Bootcamp course starts with basic frontend skills and only moves on to backend frameworks and processes, including creating databases (e.g., MongoDB), after about two thirds of the course. The course begins simply with HTML and CSS, which were fairly straightforward concepts to understand; this boosted our confidence and led us to create a very ambitious initial project plan. As we progressed through the web development curriculum, we realised that our initial plan was, for one, far too ambitious and, for another, unnecessary for the scope of this project. We therefore re-evaluated and adjusted the scope of our project and of our MVP, and ended up with a presentable, working prototype.
Community-based learning
Working with community-based learning has both pros and cons. The pros are that the learner has ownership over the ideation, project planning, and execution, which keeps the people involved motivated, curious, and helpful. Further supporting that point is the guidance from mentors, who encourage independent thinking and problem solving. Even though there is a lot to gain from community-based learning, there are also challenges and limitations. As much of the learning is assumed to happen through interaction with peers and the community, the learning depends on the quality of that community. If the community is not active or is of subpar quality, you risk leaving questions unanswered because there is no one to answer them. In addition, there is no instructor or teacher with the correct answers, which can lead to problems when you are in doubt.
Conclusion
In conclusion, we managed to address a real problem by building a technical MVP through the application of our data science and web development skills. Our MVP is an easy-to-use, mobile-first web application that lets first-time home buyers quickly get an overview of their dream house in Boston. In the project we were able to identify, ideate, and design an entrepreneurial MVP by applying our tech skills in data science and web development. Our project shows that we were able to use Python to create a prediction algorithm based on the data set we found, and that data science and web development could work together to plan the project, set milestones, and determine the relevant features, inputs, and outputs. During the project we discovered the complexity of building a technical solution and how you learn by asking better questions and developing your MVP iteratively. The business results show that there is potential for scaling Billow and creating a long-term, sustainable competitive advantage. A secondary finding of our project is when, where, and how to set new expectations based on the limitations and challenges we faced, which is a big part of the iterative development process.
Further perspectives
Data Science
To further the development of Billow, a couple of factors are needed to scale. The largest is the relatively small data set, which limits not only the areas in which we can predict house prices but also the accuracy of the model itself. It is therefore paramount for further development that we collect more data about house prices and the different parameters. Another avenue for development is improving the models by iteratively trying out new ones, as well as adding new features such as comparisons between zip codes and the development of housing prices in a specific area over recent years.
Web development
To further enhance the development of Billow, there are a couple of key components that stand out throughout the paper as the most important.
First is to build a strong backend and database using Node.js and MongoDB to store the data, which would create the foundation for scaling and for incorporating different public data sets in order to stay ahead of the competition. Node.js is a lightweight JavaScript runtime whose use would increase the stability and responsiveness of our web application. MongoDB is a database that would allow us to store larger amounts of data if we increase the scope of our project, and Node.js and MongoDB are commonly used together because they work very well with each other. Moreover, handling GET, POST, PUT, and DELETE requests through Node.js would facilitate maintaining and processing the MongoDB database. Second is to build the frontend with React, receiving data from the backend via an API. It would also be prudent to do user testing, redesign the UX, and adjust the output based on the feedback. Lastly, for connecting the frontend with the backend, a RESTful API would be our preferred solution as it allows a great deal of flexibility: an API serves as a mediator between the client and the resources or web services it wants to reach, and it is also a way for us to share resources and information while maintaining security, control, and authentication, helping us determine who gets access to what.
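A hedged sketch of what such a backend endpoint could look like, using Express on Node.js; the route, port, and in-memory placeholder data are assumptions standing in for the MongoDB-backed implementation described above:

```javascript
// Minimal REST endpoint sketch (Express on Node.js).
const express = require("express");
const app = express();

// Placeholder data; in the full solution this would come from MongoDB.
const properties = [
  { zipCode: "02108", price: 1250000, bedrooms: 2, bathrooms: 1 },
];

// GET /api/properties?zip=02108 returns all properties in a zip code.
app.get("/api/properties", (req, res) => {
  const results = properties.filter((p) => p.zipCode === req.query.zip);
  res.json(results);
});

app.listen(3000, () => console.log("Billow API listening on port 3000"));
```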
Bibliography
Websites
Bootstrap. (n.d.). Grid system. Retrieved January 21, 2021, from https://getbootstrap.com/docs/4.0/layout/grid/
Boston.com Real Estate. (2021). Boston Tops This List For The Highest Price Per Square Foot. Retrieved January 2021, from http://realestate.boston.com/news/2020/01/10/boston-tops-list-highest-price-per-square-foot/
Brack, T., & Bullen, P. B. (n.d.). About | tools4dev. Tools4dev. Retrieved January 21, 2021, from http://www.tools4dev.org/about/
Chart.js. (n.d.). Chart.js | Open source HTML5 Charts for your website. Retrieved January 21, 2021, from https://www.chartjs.org/
Demandes de valeurs foncières (DVF) — data.gouv.fr. (2020, June). Data.Gouv.Fr. https://www.data.gouv.fr/en/datasets/demandes-de-valeurs-foncieres/#_
Hale, J. (2019). Scale, Standardize, or Normalize with Scikit-Learn. Towards Data Science. Retrieved January 21, 2021, from https://towardsdatascience.com/scale-standardize-or-normalize-with-scikit-learn-6ccc7d176a02
Hoare, J. (2020, December 8). Gradient Boosting Explained — The Coolest Kid on The Machine Learning Block. Displayr. https://www.displayr.com/gradient-boosting-the-coolest-kid-on-the-machine-learning-block/
Realtor.com Real Estate Data and Market Trends for Download. (n.d.). Realtor.Com Economic Research. Retrieved January 21, 2021, from https://www.realtor.com/research/data/
Regionaldatenbank Deutschland: Regionaldatenbank Deutschland. (n.d.). Regionaldatenbank Deutschland. Retrieved January 21, 2021, from https://www.regionalstatistik.de/genesis/online/
Scikit-learn. (n.d.). sklearn.neural_network.MLPRegressor. Retrieved January 21, 2021, from https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPRegressor.html
Statista. (2020, November 19). Share of global mobile website traffic 2015–2020. https://www.statista.com/statistics/277125/share-of-website-traffic-coming-from-mobile-devices/
TIM. (2020, August 28). What is Vanilla JavaScript? A simple explanation. This Interests Me. https://thisinterestsme.com/vanilla-javascript/
Zillow. (2015, August 17). Today’s First-Time Homebuyers Older, More Often Single. http://zillow.mediaroom.com/2015-08-17-Todays-First-Time-Homebuyers-Older-More-Often-Single
Courses
Steele, C. (2020, December). The Web Developer Bootcamp: Learn HTML, CSS, Node, and More! Udemy. https://www.udemy.com/course/the-web-developer-bootcamp/
Mead, A. (2020, October). The Complete React Developer Course (w/ Hooks and Redux). Udemy. https://www.udemy.com/course/react-2nd-edition/
DataCamp. (n.d.). Data Scientist with Python. Retrieved January 21, 2021, from https://www.datacamp.com/tracks/data-scientist-with-python
Authors:
Felix Nitschke
Hannibal O. Herforth
Lukas Dissing Lundsgaard
Marc Ndiaye