Filtering Pandas DataFrames on dates

Solving problems is about exposing yourself to as many situations as possible, like filtering Pandas DataFrames on dates, and practicing these strategies over and over. With time, this becomes second nature and the natural way you approach any problem. Big or small, always start with a plan, and use the other strategies mentioned here until you are confident and ready to code the solution.
In this post, my aim is to share an overview of filtering Pandas DataFrames on dates, which you can follow at any time. It is an easy discussion to follow.

Filtering Pandas DataFrames on dates

I have a Pandas DataFrame with a ‘date’ column. Now I need to filter out all rows in the DataFrame that have dates outside of the next two months. Essentially, I only need to retain the rows that are within the next two months.

What is the best way to achieve this?

Asked By: AMM


Answer #1:

If the date column is the index, then use .loc for label-based indexing or .iloc for positional indexing.

See details here
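A minimal sketch of that label-based approach, assuming a frame indexed by date (the data and date range are illustrative):

```python
import pandas as pd

# Build a small frame indexed by date (illustrative data)
df = pd.DataFrame(
    {"value": range(90)},
    index=pd.date_range("2013-01-01", periods=90, freq="D"),
)

# Label-based slicing on a DatetimeIndex; both endpoints are inclusive
subset = df.loc["2013-01-01":"2013-02-01"]
print(len(subset))  # 32 rows: Jan 1 .. Feb 1 inclusive
```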

If the column is not the index you have two choices:

  1. Make it the index (either temporarily or permanently if it’s time-series data)
  2. df[(df['date'] > '2013-01-01') & (df['date'] < '2013-02-01')]

See here for the general explanation
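The same boolean-mask pattern can also express the question's relative window, i.e. the next two months from today; a sketch, assuming a 'date' column of datetime64 values:

```python
import pandas as pd

# Illustrative frame with a 'date' column spanning past and future dates
df = pd.DataFrame({
    "date": pd.Timestamp.now().normalize()
            + pd.to_timedelta(range(-30, 120), unit="D")
})

today = pd.Timestamp.now().normalize()
cutoff = today + pd.DateOffset(months=2)  # two months from today

# Keep only the rows that fall within the next two months
within = df[(df["date"] >= today) & (df["date"] < cutoff)]
print(within["date"].min(), within["date"].max())
```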

Note: .ix is deprecated.

Answered By: Retozi

Answer #2:

The previous answer is not correct in my experience; you can't pass it a simple string, it needs to be a datetime object. So:

import datetime
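The snippet above was truncated in the post; it presumably continued by comparing against datetime objects, along these lines (the column name and dates are illustrative):

```python
import datetime
import pandas as pd

# Illustrative frame with a datetime64 'date' column
df = pd.DataFrame({"date": pd.date_range("2015-12-01", periods=120, freq="D")})

# Compare against datetime objects rather than plain strings
mask = (df["date"] > datetime.datetime(2016, 1, 1)) & \
       (df["date"] < datetime.datetime(2016, 3, 1))
print(df[mask].shape[0])  # 59 rows: Jan 2 .. Feb 29 (2016 is a leap year)
```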
Answered By: orange1

Answer #3:

And if your dates are standardized via the datetime package, you can simply use:

df[(df['date'] > datetime.date(2016, 1, 1)) & (df['date'] < datetime.date(2016, 3, 1))]

For standardizing your date strings with the datetime package, you can use a function like this:

import datetime
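The function itself was lost from the post; a plausible sketch of such a helper, where the input and output format strings are assumptions:

```python
import datetime

def standardize_date(date_str, in_fmt="%d/%m/%Y", out_fmt="%Y-%m-%d"):
    """Parse a date string in one format and re-emit it in ISO form.
    The default formats here are illustrative assumptions."""
    return datetime.datetime.strptime(date_str, in_fmt).strftime(out_fmt)

print(standardize_date("15/05/2019"))  # 2019-05-15
```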
Answered By: shm2008

Answer #4:

If your datetime column has the Pandas datetime type (e.g. datetime64[ns]), for proper filtering you need a pd.Timestamp object, for example:

from datetime import date
import pandas as pd
value_to_check = pd.Timestamp(date.today().year, 1, 1)
filter_mask = df['date_column'] < value_to_check
filtered_df = df[filter_mask]
Answered By: VMAtm

Answer #5:

If the dates are in the index then simply:
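The code was missing from this answer; with a DatetimeIndex, partial-string slicing is presumably what was meant (dates are illustrative):

```python
import pandas as pd

# Illustrative frame indexed by date
df = pd.DataFrame({"value": range(100)},
                  index=pd.date_range("2016-01-01", periods=100, freq="D"))

# Partial-string indexing on a DatetimeIndex; both endpoints are inclusive
subset = df["20160101":"20160301"]
print(len(subset))  # 61 rows: Jan 1 .. Mar 1 (2016 is a leap year)
```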

Answered By: fantabolous

Answer #6:

You can use pd.Timestamp to perform a query, referencing it through a local variable:

import pandas as pd
import numpy as np
df = pd.DataFrame()
ts = pd.Timestamp
df['date'] = np.array(np.arange(10) + 1557904396, dtype='M8[s]')
print(df)
print(df.query('date > @ts("20190515T071320")'))

with the output

                 date
0 2019-05-15 07:13:16
1 2019-05-15 07:13:17
2 2019-05-15 07:13:18
3 2019-05-15 07:13:19
4 2019-05-15 07:13:20
5 2019-05-15 07:13:21
6 2019-05-15 07:13:22
7 2019-05-15 07:13:23
8 2019-05-15 07:13:24
9 2019-05-15 07:13:25
                 date
5 2019-05-15 07:13:21
6 2019-05-15 07:13:22
7 2019-05-15 07:13:23
8 2019-05-15 07:13:24
9 2019-05-15 07:13:25

Have a look at the pandas documentation for DataFrame.query, specifically the note about local variables referenced using the @ prefix. In this case we reference pd.Timestamp through the local alias ts so that we can supply a timestamp string.

Answered By: danielhrisca

Answer #7:

If you have already converted the string to a date format using pd.to_datetime you can just use:

df = df[(df['Date'] > "2018-01-01") & (df['Date'] < "2019-07-01")]
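For completeness, a sketch of the pd.to_datetime conversion this answer presupposes (the column name and data are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"Date": ["2017-12-31", "2018-06-15", "2019-08-01"]})

# Convert the string column to datetime64 so comparisons are chronological
df["Date"] = pd.to_datetime(df["Date"])

filtered = df[(df["Date"] > "2018-01-01") & (df["Date"] < "2019-07-01")]
print(len(filtered))  # only 2018-06-15 falls inside the window
```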

Answered By: Jerin Mathew

Answer #8:

When loading the CSV data file, we need to set the date column as the index, as shown below, in order to filter data based on a range of dates. This was not needed for the now-deprecated method pd.DataFrame.from_csv().

If you just want to show the data for two months from Jan to Feb, e.g. 2020-01-01 to 2020-02-29, you can do so:

import pandas as pd
mydata = pd.read_csv('mydata.csv', index_col='date', parse_dates=True)  # or its index number, e.g. index_col=[0]; parse_dates makes the index a DatetimeIndex
mydata['2020-01-01':'2020-02-29']  # will pull all the columns
# if you just need one column, e.g. Cost:
mydata['2020-01-01':'2020-02-29']['Cost']

This has been tested working for Python 3.7. Hope you will find this useful.

Answered By: Harry
