This page provides you with instructions on how to extract data from Google Cloud SQL and analyze it in Metabase. (If the mechanics of extracting data from Google Cloud SQL seem too complex or difficult to maintain, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)
What is Google Cloud SQL?
Google Cloud SQL is a managed database service that lets DBAs set up, maintain, and administer MySQL and PostgreSQL databases on Google Cloud Platform.
What is Metabase?
Metabase provides a visual query builder that lets users generate simple charts and dashboards, and supports SQL for gathering data for more complex business intelligence visualizations. It runs as a JAR file, and its developers make it available in a Docker container and on Heroku and AWS. Metabase is free of cost and open source, licensed under the AGPL.
Getting data out of Google Cloud SQL
In most cases, the easiest way to retrieve data from relational databases is by writing SQL queries.
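As a quick sketch of what that looks like in practice, the Python snippet below connects to a Cloud SQL for PostgreSQL instance with a standard driver and writes one table to a local CSV file. The host, credentials, table name, and output path are placeholders rather than values from this article; for a MySQL instance you would swap in a MySQL driver.

```python
# Hypothetical example: export one table from a Cloud SQL for PostgreSQL
# instance to a local CSV file. All connection details are placeholders.
import csv
import psycopg2  # use a MySQL driver (e.g. mysql-connector-python) for MySQL instances

conn = psycopg2.connect(
    host="203.0.113.10",   # placeholder: your instance's IP or proxy address
    dbname="mydb",
    user="extract_user",
    password="secret",
)

with conn, conn.cursor() as cur, open("orders.csv", "w", newline="") as f:
    cur.execute("SELECT * FROM orders")                    # placeholder table
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cur.description])   # header row
    writer.writerows(cur.fetchall())
```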
Google also provides a REST API for administering databases, instances, and other objects in Cloud SQL. So, for example, to retrieve a resource containing information about a database inside a Cloud SQL instance for a particular project, you could call the Cloud SQL Admin API's databases.get method.
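Assuming you have an OAuth 2.0 access token that is authorized for the Cloud SQL Admin API, the request might look like the sketch below. The project, instance, and database names are placeholders, and the URL follows the Admin API's v1 path; check Google's reference documentation for the exact form your account should use.

```python
# Hypothetical example: fetch a Database resource via the Cloud SQL Admin API.
# PROJECT, INSTANCE, and DATABASE are placeholders; ACCESS_TOKEN must be an
# OAuth 2.0 token with Cloud SQL Admin scope (from gcloud or a service account).
import requests

PROJECT, INSTANCE, DATABASE = "my-project", "my-instance", "mydb"
ACCESS_TOKEN = "ya29...."  # placeholder

url = (
    "https://sqladmin.googleapis.com/v1/"
    f"projects/{PROJECT}/instances/{INSTANCE}/databases/{DATABASE}"
)
resp = requests.get(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
resp.raise_for_status()
print(resp.json())  # the Database resource as JSON
```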
If your underlying database is PostgreSQL, you can use the pg_dump command to export a database as a plain-text SQL script (or an archive) that you can use to restore it on any Postgres server, or the COPY command to export individual tables as CSV flat files. If your underlying database is MySQL, you can use the mysqldump command to export entire tables and databases as either delimited text or SQL statements that can recreate the database.
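If you want to script those exports rather than run them by hand, a thin wrapper like the one below is one option. The hosts, users, database names, and output paths are placeholders, and both utilities must already be installed on the machine running the script.

```python
# Hypothetical wrapper around pg_dump and mysqldump. Hosts, users, database
# names, and output paths are placeholders; both tools must be on the PATH.
import subprocess

# PostgreSQL: write a plain-text SQL script that can rebuild the database.
# pg_dump reads the password from PGPASSWORD or ~/.pgpass.
subprocess.run(
    ["pg_dump", "--host=203.0.113.10", "--username=extract_user",
     "--format=plain", "--file=mydb_backup.sql", "mydb"],
    check=True,
)

# MySQL: write a SQL dump of one database to a local file.
with open("mydb_dump.sql", "w") as out:
    subprocess.run(
        ["mysqldump", "--host=203.0.113.11", "--user=extract_user",
         "--password=secret", "mydb"],
        stdout=out,
        check=True,
    )
```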
Sample Google Cloud SQL data
The GET call we mentioned would return a database resource, which contains seven properties. Other API calls return different resources.
For data you export via SQL query, pg_dump, or mysqldump, you need a matching table in your data warehouse to receive the data from Cloud SQL. The information_schema database contains all of the metadata information you need to recreate your tables in another environment.
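One way to pull that metadata is to query information_schema directly. The sketch below reads column names and datatypes for a single table; the connection details and table name are placeholders.

```python
# Hypothetical example: read column metadata from information_schema so you can
# recreate the table in your warehouse. Connection details and table name are
# placeholders; the same query works on both PostgreSQL and MySQL.
import psycopg2

conn = psycopg2.connect(host="203.0.113.10", dbname="mydb",
                        user="extract_user", password="secret")

with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT column_name, data_type, is_nullable
        FROM information_schema.columns
        WHERE table_name = %s
        ORDER BY ordinal_position
        """,
        ("orders",),  # placeholder table
    )
    for column_name, data_type, is_nullable in cur.fetchall():
        print(column_name, data_type, is_nullable)
```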
Preparing Google Cloud SQL data
If you don't already have a data structure in which to store the data you retrieve, you'll have to create a schema for your data tables. Then, for each value in the response, you'll need to identify a predefined datatype (INTEGER, DATETIME, etc.) and build a table that can receive it. Google's documentation should tell you what fields are provided by each endpoint, along with their corresponding datatypes.
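For example, once you've mapped each property to a datatype, you might create the receiving table with DDL along these lines. The warehouse connection details, table name, and column list are illustrative, not taken from Google's documentation.

```python
# Hypothetical DDL for a receiving table in a PostgreSQL-compatible warehouse.
# Connection details, table name, and columns are illustrative placeholders.
import psycopg2

warehouse = psycopg2.connect(host="warehouse.example.com", dbname="analytics",
                             user="loader", password="secret")

with warehouse, warehouse.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS cloudsql_databases (
            name          TEXT,
            charset       TEXT,
            db_collation  TEXT,
            instance      TEXT,
            project       TEXT,
            self_link     TEXT,
            etag          TEXT
        )
    """)
```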
Complicating things is the fact that the records retrieved from the source may not always be "flat" – some of the objects may actually be lists. This means you'll likely have to create additional tables to capture the unpredictable cardinality in each record.
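As a small illustration, suppose a record contains a list-valued property. A common pattern is to split it into one row for a parent table plus rows for a child table keyed back to the parent; the record shape and table names below are invented for the example.

```python
# Hypothetical example: flatten a record whose "users" property is a list into
# rows for a parent table and a child table. Record shape and table names are
# invented for illustration.
record = {
    "name": "mydb",
    "charset": "UTF8",
    "users": [{"name": "alice"}, {"name": "bob"}],  # nested list
}

parent_row = {k: v for k, v in record.items() if not isinstance(v, list)}
child_rows = [
    {"database_name": record["name"], "user_name": u["name"]}
    for u in record["users"]
]

print(parent_row)  # load into the parent table
print(child_rows)  # load into a child table keyed by database_name
```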
Loading data into Metabase
Metabase works with data in databases; you can't use it as a front end for a SaaS application without replicating the data to a data warehouse first. Out of the box, Metabase supports 15 database sources, and you can download 10 additional third-party database drivers or write your own. Once you choose a source, you must provide a host name and port, database name, and username and password to give Metabase access to the data.
Using data in Metabase
Metabase supports three kinds of queries: simple, custom, and SQL. Users create simple queries entirely through a visual drag-and-drop interface. Custom queries use a notebook-style editor that lets users select, filter, summarize, and otherwise customize the presentation of the data. The SQL editor lets users type or paste in SQL queries.
Keeping Google Cloud SQL data up to date
At this point you've coded up a script or written a program to get the data you want and successfully moved it into your data warehouse. But how will you load new or updated data? It's not a good idea to replicate all of your data each time you have updated records. That process would be painfully slow and resource-intensive.
Instead, identify key fields that your script can use to bookmark its progression through the data, so it can pick up where it left off when it looks for updated records. Fields such as updated_at and created_at, or an auto-incrementing primary key, work well for this. Once you've built in this functionality, you can set up your script as a cron job or continuous loop to get new data as it appears in Google Cloud SQL.
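A bare-bones version of such an incremental job might look like the sketch below. The connection details, table, and bookmark file are placeholders, and the step that loads rows into the warehouse is left as a comment; a production job would also need to handle retries and deleted records.

```python
# Hypothetical incremental extraction: pull only rows changed since the last
# run, using updated_at as the bookmark. Connection details, table name, and
# bookmark file are placeholders.
import json
import psycopg2

BOOKMARK_FILE = "last_run.json"

def load_bookmark():
    try:
        with open(BOOKMARK_FILE) as f:
            return json.load(f)["updated_at"]
    except FileNotFoundError:
        return "1970-01-01T00:00:00"   # first run: take everything

def save_bookmark(value):
    with open(BOOKMARK_FILE, "w") as f:
        json.dump({"updated_at": value}, f)

conn = psycopg2.connect(host="203.0.113.10", dbname="mydb",
                        user="extract_user", password="secret")

with conn, conn.cursor() as cur:
    cur.execute(
        "SELECT id, customer_id, total, updated_at FROM orders "
        "WHERE updated_at > %s ORDER BY updated_at",
        (load_bookmark(),),
    )
    rows = cur.fetchall()
    # ... load `rows` into the data warehouse here ...
    if rows:
        save_bookmark(rows[-1][-1].isoformat())  # newest updated_at seen
```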
And remember, as with any code, once you write it, you have to maintain it. If Google modifies its API, or the API sends a field with a datatype your code doesn't recognize, you may have to modify the script. If your users want slightly different information, you definitely will have to.
From Google Cloud SQL to your data warehouse: An easier solution
As mentioned earlier, the best practice for analyzing Google Cloud SQL data in Metabase is to store that data inside a data warehousing platform alongside data from your other databases and third-party sources. You can find instructions for doing these extractions for leading warehouses on our sister sites Google Cloud SQL to Redshift, Google Cloud SQL to BigQuery, Google Cloud SQL to Azure Synapse Analytics, Google Cloud SQL to PostgreSQL, Google Cloud SQL to Panoply, and Google Cloud SQL to Snowflake.
Easier yet, however, is using a solution that does all that work for you. Products like Stitch were built to move data automatically, making it easy to integrate Google Cloud SQL with Metabase. With just a few clicks, Stitch starts extracting your Google Cloud SQL data, structuring it in a way that's optimized for analysis, and inserting that data into a data warehouse that can be easily accessed and analyzed by Metabase.