Logging
The Supabase Platform includes a Logs Explorer that allows log tracing and debugging. Log retention is based on your project's pricing plan.
Product logs
Supabase provides a logging interface specific to each product. You can use simple regular expressions for keywords and patterns to search log event messages. You can also export and download the log events matching your query as a spreadsheet.
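For example, a pattern like `timeout|disconnect` (a hypothetical search; substitute your own terms) matches log event messages containing either keyword.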
Logging Postgres queries
By default, only DDL statements are logged for new Supabase projects.
For projects created before 18th July 2023, statement execution logging is disabled completely by default.
To enable query logs for other categories of statements:
- Enable the pgAudit extension (a sample statement follows this list).
- Restart your project using the Fast database reboot option.
- Configure `pgaudit.log` (see below). Perform a fast reboot if needed.
- View your query logs under Logs > Postgres Logs.
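A minimal sketch of the first step, assuming you enable the extension via SQL rather than the dashboard:

```sql
-- enable the pgAudit extension if it is not already enabled
create extension if not exists pgaudit;
```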
Configuring pgaudit.log
The value stored under `pgaudit.log` determines the classes of statements that are logged by the pgAudit extension. Refer to the pgAudit documentation for the full list of values.
To enable logging for function calls/do blocks, writes, and DDL statements for a single session, execute the following within the session:
```sql
-- temporary single-session config update
set pgaudit.log = 'function, write, ddl';
```
To permanently set a logging configuration (beyond a single session), execute the following, then perform a fast reboot:
```sql
-- equivalent permanent config update
alter role postgres set pgaudit.log to 'function, write, ddl';
```
To help with debugging, we recommend narrowing the log scope to only the relevant statement classes, as too wide a scope results in a lot of noise in your Postgres logs.
Note that in the above example, the role is set to `postgres`. To log user traffic flowing through the HTTP APIs powered by PostgREST, set your configuration values for the `authenticator` role.
```sql
-- for API-related logs
alter role authenticator set pgaudit.log to 'write';
```
By default, the log level is set to `log`. To use other levels, run the following:
```sql
-- adjust log level
alter role postgres set pgaudit.log_level to 'info';
alter role postgres set pgaudit.log_level to 'debug5';
```
Note that as per the pgAudit log_level documentation, `error`, `fatal`, and `panic` are not allowed.
To reset system-wide settings, execute the following, then perform a fast reboot:
```sql
-- resets stored config
alter role postgres reset pgaudit.log;
```
If you encounter permission errors when executing `alter role postgres ...`, your project has likely not yet received the patch to the latest version of supautils, which is currently being rolled out.
Logging realtime connections
Realtime doesn't log new WebSocket connections or Channel joins by default. Enable connection logging per client by setting the `log_level` parameter to `info` when instantiating the Supabase client.
```js
import { createClient } from '@supabase/supabase-js'

const options = {
  realtime: {
    log_level: 'info',
  },
}
const supabase = createClient('https://xyzcompany.supabase.co', 'public-anon-key', options)
```
Logs Explorer
The Logs Explorer exposes logs from each part of the Supabase stack as a separate table that can be queried and joined using SQL.
You can access the following logs from the Sources drop-down:
- `auth_logs`: GoTrue server logs, containing authentication/authorization activity.
- `edge_logs`: Edge network logs, containing request and response metadata retrieved from Cloudflare.
- `function_edge_logs`: Edge network logs for edge functions only, containing network request and response metadata for each execution.
- `function_logs`: Function internal logs, containing any `console` logging from within the edge function.
- `postgres_logs`: Postgres database logs, containing statements executed by connected applications.
- `realtime_logs`: Realtime server logs, containing client connection information.
- `storage_logs`: Storage server logs, containing object upload and retrieval information.
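Each source can be queried like a regular table. A minimal sketch (the `event_message` field is an assumption here; `timestamp` is documented below):

```sql
-- fetch the ten most recent auth events
select datetime(timestamp), event_message
from auth_logs
order by timestamp desc
limit 10;
```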
Querying with the Logs Explorer
The Logs Explorer uses BigQuery and supports all available SQL functions and operators.
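For instance, BigQuery date functions can be combined with standard aggregates; a sketch of an hourly event count:

```sql
-- count edge log events per hour using BigQuery's timestamp_trunc
select
  timestamp_trunc(timestamp, hour) as hour_bucket,
  count(*) as events
from edge_logs
group by hour_bucket
order by hour_bucket desc;
```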
Timestamp display and behavior
Each log entry is stored with a `timestamp` as a `TIMESTAMP` data type. Use the appropriate timestamp function to utilize the `timestamp` field in a query.
Raw top-level timestamp values are rendered as Unix microseconds. To render timestamps in a human-readable format, use the `datetime()` function to convert the Unix timestamp into an ISO 8601 timestamp.
```sql
-- timestamp column without datetime()
select timestamp from ....
-- 1664270180000

-- timestamp column with datetime()
select datetime(timestamp) from ....
-- 2022-09-27T09:17:10.439Z
```
Unnesting arrays
Each log event stores metadata as an array of objects with multiple levels, which can be seen by selecting a single log event in the Logs Explorer. To query arrays, use `unnest()` on each array field and add it to the query as a join. This allows you to reference the nested objects with an alias and select their individual fields.
For example, to query the edge logs without any joins:
```sql
select timestamp, metadata from edge_logs as t;
```
The resulting `metadata` key is rendered as an array of objects in the Logs Explorer, where each box in the rendered view represents a nested array of objects. Perform a cross join with `unnest()` to work with the keys nested in the `metadata` key.
To query for a nested value, add a join for each array level:
```sql
select timestamp, request.method, header.cf_ipcountry
from
  edge_logs as t
  cross join unnest(t.metadata) as metadata
  cross join unnest(metadata.request) as request
  cross join unnest(request.headers) as header;
```
This surfaces the nested columns available for selection, allowing you to select the `method` and `cf_ipcountry` columns. In JS dot notation, the full paths for each selected column are:

- `metadata[].request[].method`
- `metadata[].request[].headers[].cf_ipcountry`
LIMIT and result row limitations
The Logs Explorer returns a maximum of 1000 rows per run. Use `LIMIT` to further reduce the number of rows returned and optimize your queries.
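For example, the following caps the result set well below the 1000-row ceiling:

```sql
-- return at most 100 rows
select datetime(timestamp)
from edge_logs
limit 100;
```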
Best practices
- Include a filter over `timestamp`

Querying your entire log history might seem appealing, but for Enterprise customers with a large retention range, you run the risk of timeouts due to the additional time required to scan the larger dataset.
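A minimal sketch of such a filter, assuming BigQuery's `timestamp_sub` is available in the Logs Explorer:

```sql
-- scan only the last 24 hours of logs
select datetime(timestamp)
from edge_logs
where timestamp > timestamp_sub(current_timestamp(), interval 1 day);
```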
- Avoid selecting large nested objects. Select individual values instead.
When querying large objects, the columnar storage engine selects each column associated with each nested key, resulting in a large number of columns being selected. This negatively impacts query speed and may result in timeouts or memory errors, especially for projects with a lot of logs.
Instead, select only the values required.
```sql
-- ❌ Avoid doing this
select
  datetime(timestamp),
  m as metadata -- <- metadata contains many nested keys
from
  edge_logs as t
  cross join unnest(t.metadata) as m;

-- ✅ Do this
select
  datetime(timestamp),
  r.method -- <- select only the required values
from
  edge_logs as t
  cross join unnest(t.metadata) as m
  cross join unnest(m.request) as r;
```
Examples and templates
The Logs Explorer includes Templates (available in the Templates tab or the dropdown in the Query tab) to help you get started.
For example, you can enter the following query in the Logs Explorer to retrieve each user's IP address:
```sql
select datetime(timestamp), h.x_real_ip
from
  edge_logs
  cross join unnest(metadata) as m
  cross join unnest(m.request) as r
  cross join unnest(r.headers) as h
where h.x_real_ip is not null and r.method = "GET";
```
Log source reference
Refer to the full field reference for each available source below. Note that to access each nested key, you need to perform the necessary unnesting joins, as described above.