If you have to analyse a lot of data and need a good backend to fetch and store your results piecewise in small chunks, try MonetDB(Lite). I often used CSV or TSV files because they are convenient. But from time to time you have to do some filtering, joins, or other data operations. So I looked for a nice database, and MonetDB seems to have some nice properties...
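The chunked fetch-analyse-store pattern looks roughly like this. The sketch uses Python's built-in sqlite3 as a stand-in for an embedded database (MonetDBLite exposes a similar DB-API-style interface); the table, column names, and the `analyse` function are all illustrative placeholders, not part of any real pipeline.

```python
import sqlite3

# sqlite3 stands in here for an embedded analytics database such
# as MonetDBLite; the pattern (write results chunk by chunk, then
# filter/join in SQL) is the same.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (id INTEGER, value REAL)")

def analyse(chunk):
    # placeholder for the real per-chunk computation
    return [(i, i * 0.5) for i in chunk]

all_ids = range(10)
chunk_size = 3
for start in range(0, len(all_ids), chunk_size):
    chunk = all_ids[start:start + chunk_size]
    # store each chunk's results immediately instead of keeping
    # everything in memory
    conn.executemany("INSERT INTO results VALUES (?, ?)", analyse(chunk))
    conn.commit()

# later: do the filtering in SQL rather than in application code
total = conn.execute(
    "SELECT COUNT(*) FROM results WHERE value >= 2.5"
).fetchone()[0]
```

The nice part is that the filtering and joining then happens in the database, so the full result set never has to fit in memory at once.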

I noticed that Google Cloud Storage is a bit cheaper than Amazon S3. Although the difference seems small, it can add up if you have some big data sets or huge backups stored in Amazon's storage solution. This tutorial shows how you can migrate from/to Google Cloud Storage. But keep in mind that you will have to pay for the file transfer!
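Before migrating it is worth doing the arithmetic on that one-time transfer fee versus the monthly savings. A minimal back-of-the-envelope sketch, where all per-GB rates are illustrative placeholders and not current AWS/GCP pricing (check the real price sheets):

```python
# Rough break-even estimate for a storage migration.
# All rates below are ASSUMED example values in USD/GB,
# not real AWS or Google Cloud prices.
def transfer_cost(gigabytes, egress_rate_per_gb):
    """One-time cost to move the data out of the old provider."""
    return gigabytes * egress_rate_per_gb

def monthly_saving(gigabytes, old_rate, new_rate):
    """Monthly storage saving after the move."""
    return gigabytes * (old_rate - new_rate)

backup_gb = 500
egress = transfer_cost(backup_gb, 0.09)          # assumed egress rate
saving = monthly_saving(backup_gb, 0.023, 0.020)  # assumed storage rates
print(f"one-time egress: ${egress:.2f}, saving: ${saving:.2f}/month")
```

If the egress fee is many months' worth of savings, the migration may not be worth it for data you plan to delete soon anyway.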

Apache Drill is a really easy-to-use, Dremel-based big-data analysis tool. So it's perfect if you have a lot of static data (read-only workloads) and want to use SQL. And best of all: it does not require Hadoop/HDFS :)
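For example, Drill can query a plain CSV file with SQL and exposes a REST endpoint (`POST /query.json` on its web server, port 8047 by default) for submitting queries. A hedged sketch of building such a request; the file path and query are made-up examples, and actually sending it requires a running drillbit:

```python
import json

# Build the JSON payload for Drill's REST query endpoint.
# The CSV path and query are illustrative only.
query = "SELECT columns[0] AS name FROM dfs.`/tmp/data.csv` LIMIT 10"
payload = {"queryType": "SQL", "query": query}
body = json.dumps(payload)

# To actually execute it against a running Drill instance:
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8047/query.json",
#     data=body.encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```

No cluster setup, no HDFS: point Drill at the files on disk and query them.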