Solving problems is about exposing yourself to as many situations as possible, like importing multiple csv files into pandas and concatenating them into one DataFrame, and practicing these strategies over and over. With time, this becomes second nature and a natural way to approach any problem. Big or small, always start with a plan and use the other strategies mentioned here until you are confident and ready to code the solution.
In this post, my aim is to give an overview of the topic of importing multiple csv files into pandas and concatenating them into one DataFrame, which you can follow at any time.
I would like to read several csv files from a directory into pandas and concatenate them into one big DataFrame. I have not been able to figure it out though. Here is what I have so far:
import glob
import pandas as pd

# get data file names
path = r'C:\DRO\DCL_rawdata_files'
filenames = glob.glob(path + "/*.csv")

dfs = []
for filename in filenames:
    dfs.append(pd.read_csv(filename))

# Concatenate all data into one DataFrame
big_frame = pd.concat(dfs, ignore_index=True)
I guess I need some help within the for loop???
If you have the same columns in all your csv files, you can try the code below. I have added header=0 so that after reading the csv, the first row can be assigned as the column names.
import pandas as pd
import glob

path = r'C:\DRO\DCL_rawdata_files'  # use your path
all_files = glob.glob(path + "/*.csv")

li = []
for filename in all_files:
    df = pd.read_csv(filename, index_col=None, header=0)
    li.append(df)

frame = pd.concat(li, axis=0, ignore_index=True)
An alternative to darindaCoder’s answer:
import glob
import os
import pandas as pd

path = r'C:\DRO\DCL_rawdata_files'  # use your path
# advisable to use os.path.join as this makes concatenation OS independent
all_files = glob.glob(os.path.join(path, "*.csv"))

# doesn't create a list, nor does it append to one
df_from_each_file = (pd.read_csv(f) for f in all_files)
concatenated_df = pd.concat(df_from_each_file, ignore_index=True)
import glob
import os
import pandas as pd

df = pd.concat(map(pd.read_csv, glob.glob(os.path.join('', "my_files*.csv"))))
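If the files live in a specific folder rather than the current directory, the same one-liner can point there. A small variation, assuming a hypothetical "data" directory:

import glob
import os
import pandas as pd

# same idea, pointed at a hypothetical "data" directory instead of the current one
df = pd.concat(map(pd.read_csv, glob.glob(os.path.join("data", "*.csv"))))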
The Dask library can read a dataframe from multiple files:
import dask.dataframe as dd

df = dd.read_csv('data*.csv')
The Dask dataframes implement a subset of the Pandas dataframe API. If all the data fits into memory, you can call df.compute() to convert the dataframe into a Pandas dataframe.
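A minimal sketch of that round trip, assuming the same data*.csv pattern used above and that the combined data fits in memory:

import dask.dataframe as dd

# lazily read every matching csv into a single Dask dataframe
ddf = dd.read_csv('data*.csv')

# materialize the result as a regular pandas DataFrame
pandas_df = ddf.compute()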
Almost all of the answers here are either unnecessarily complex (glob pattern matching) or rely on additional 3rd party libraries. You can do this in 2 lines using everything Pandas and python (all versions) already have built in.
For a few files – 1 liner:
df = pd.concat(map(pd.read_csv, ['data/d1.csv', 'data/d2.csv','data/d3.csv']))
For many files:
from os import listdir

# keep the directory prefix so read_csv can find each file
filepaths = ["./data/" + f for f in listdir("./data") if f.endswith('.csv')]
df = pd.concat(map(pd.read_csv, filepaths))
This pandas line, which sets df, utilizes three things: map() applies a function to every element of an iterable, pd.read_csv() reads each filepath in the list into its own DataFrame, and pd.concat() joins all of them into one.
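For illustration, the one-liner above is roughly equivalent to this explicit version (reusing the filepaths list built with listdir):

import pandas as pd

# what map(pd.read_csv, filepaths) + pd.concat do, spelled out step by step
frames = []
for path in filepaths:                # filepaths comes from the listdir snippet above
    frames.append(pd.read_csv(path))  # read each csv into its own DataFrame
df = pd.concat(frames)                # stack all of them into one DataFrame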
Easy and fast: import two or more csv files without having to make a list of names.
import glob
import pandas as pd

df = pd.concat(map(pd.read_csv, glob.glob('data/*.csv')))
Edit: I googled my way into https://stackoverflow.com/a/21232849/186078.
However, of late I am finding it faster to do any manipulation using NumPy and then assign the result once to a dataframe, rather than manipulating the dataframe itself iteratively, and that seems to work in this solution too.
I do sincerely want anyone hitting this page to consider this approach, but I don't want to attach this huge piece of code as a comment and make it less readable.
You can leverage numpy to really speed up the dataframe concatenation.
import os
import glob
import pandas as pd
import numpy as np

path = "my_dir_full_path"
allFiles = glob.glob(os.path.join(path, "*.csv"))

np_array_list = []
for file_ in allFiles:
    df = pd.read_csv(file_, index_col=None, header=0)
    np_array_list.append(df.to_numpy())  # df.as_matrix() was removed in newer pandas

# stack all the arrays at once, then build a single DataFrame
comb_np_array = np.vstack(np_array_list)
big_frame = pd.DataFrame(comb_np_array)
big_frame.columns = ["col1", "col2", ...]  # replace with your actual column names
total files: 192
avg lines per file: 8492
total records: 1630571
approach 1 (without numpy): 8.248656988143921 seconds
approach 2 (with numpy): 2.289292573928833 seconds
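A rough sketch of how such a comparison could be timed; the directory, files, and column handling here are assumptions for illustration, not the author's exact benchmark:

import time
import glob
import os
import pandas as pd
import numpy as np

all_files = glob.glob(os.path.join("my_dir_full_path", "*.csv"))  # hypothetical path

# approach 1: build a list of DataFrames and concatenate them
start = time.time()
frame1 = pd.concat(
    [pd.read_csv(f, index_col=None, header=0) for f in all_files],
    ignore_index=True)
print("without numpy:", time.time() - start, "seconds")

# approach 2: stack NumPy arrays, then build a single DataFrame at the end
start = time.time()
arrays = [pd.read_csv(f, index_col=None, header=0).to_numpy() for f in all_files]
frame2 = pd.DataFrame(np.vstack(arrays))
print("with numpy:", time.time() - start, "seconds")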
If you want to search recursively (Python 3.5 or above), you can do the following:
from glob import iglob
import pandas as pd

path = r'C:\user\your\path\**\*.csv'
all_rec = iglob(path, recursive=True)
dataframes = (pd.read_csv(f) for f in all_rec)
big_dataframe = pd.concat(dataframes, ignore_index=True)
Note that the last three lines can be expressed in one single line:
df = pd.concat((pd.read_csv(f) for f in iglob(path, recursive=True)), ignore_index=True)
You can find the documentation of ** in the Python glob module documentation. Also, I used iglob, as it returns an iterator instead of a list.
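To make that difference concrete, a small sketch comparing the two (the data directory is a placeholder):

from glob import glob, iglob

# glob() builds the whole list of matches up front
files_list = glob('data/**/*.csv', recursive=True)

# iglob() yields matches lazily, one at a time
for f in iglob('data/**/*.csv', recursive=True):
    print(f)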
EDIT: Multiplatform recursive function:
You can wrap the above into a multiplatform function (Linux, Windows, Mac), so you can do:
df = read_df_rec(r'C:\user\your\path', r'*.csv')
Here is the function:
from glob import iglob
from os.path import join
import pandas as pd

def read_df_rec(path, fn_regex=r'*.csv'):
    # recursively read every matching csv under path and concatenate them
    return pd.concat((pd.read_csv(f) for f in iglob(
        join(path, '**', fn_regex), recursive=True)), ignore_index=True)