A Complete Guide to Exporting PostgreSQL Table Lists to Files – Covering Every Format and Use Case


"I want to see at a glance what tables exist in the database." "I need to export table information to write documentation." "I want to record the table structure on a regular schedule."

In database administration and development, you need to write a table list to a file more often than you might expect. The output can range from a simple list to full definition details, and there are many ways to produce it depending on the purpose.

This article covers how to export PostgreSQL table lists to files, from the basics through advanced techniques, with practical examples along the way.


Exporting a Table List with the psql Command

Basic output

# Write a simple table list to a file
psql -U postgres -d mydatabase -c "\dt" > table_list.txt

# Include schemas other than public
psql -U postgres -d mydatabase -c "\dt *.*" > all_tables.txt

# Include extra detail (size, description)
psql -U postgres -d mydatabase -c "\dt+" > table_details.txt

# Output in CSV format (psql 12 or later)
psql -U postgres -d mydatabase -c "\dt" --csv > table_list.csv

# Output in HTML format
psql -U postgres -d mydatabase -c "\dt" -H > table_list.html
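All of the commands above prompt for a password when one is required. For unattended runs, two common options are sketched below; the host, user, and placeholder password are assumptions to replace with your own values.

# Option 1: supply the password through the environment for a single run
PGPASSWORD='secret' psql -h localhost -U postgres -d mydatabase -c "\dt" > table_list.txt

# Option 2: store credentials in ~/.pgpass (format: host:port:database:user:password);
# psql ignores the file unless its permissions are restricted
echo 'localhost:5432:mydatabase:postgres:secret' >> ~/.pgpass
chmod 600 ~/.pgpass

# -w never prompts for a password
psql -h localhost -U postgres -d mydatabase -w -c "\dt" > table_list.txt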

Making use of psql meta-commands

# Specify the output destination with the \o meta-command
psql -U postgres -d mydatabase << EOF
\o table_info.txt
\dt
\dt+
\o
EOF

# Emit several pieces of information in one pass
psql -U postgres -d mydatabase << EOF > database_info.txt
\echo 'DATABASE INFORMATION REPORT'
\echo '==========================='
\echo ''
\echo 'Table List:'
\dt
\echo ''
\echo 'View List:'
\dv
\echo ''
\echo 'Index List:'
\di
\echo ''
\echo 'Sequence List:'
\ds
EOF
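When the output feeds another script rather than a person, psql's unaligned, tuples-only mode strips the decorative borders. A minimal sketch:

# -A (unaligned) and -t (tuples only) remove borders and headers;
# -F sets the field separator
psql -U postgres -d mydatabase -At -F',' -c "
    SELECT schemaname, tablename
    FROM pg_tables
    WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
    ORDER BY 1, 2
" > table_list.txt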

Retrieving a Table List with SQL Queries

A basic table list

-- COPY writes the file on the database server (requires superuser or pg_write_server_files membership)
COPY (
    SELECT 
        schemaname AS schema,
        tablename AS table_name,
        tableowner AS owner
    FROM pg_tables
    WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
    ORDER BY schemaname, tablename
) TO '/tmp/table_list.csv' WITH CSV HEADER;

-- \COPY writes the file on the client machine (no server-side file privileges needed)
\COPY (SELECT schemaname, tablename FROM pg_tables WHERE schemaname = 'public') TO 'table_list.csv' WITH CSV HEADER;
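The same client-side \copy can also be driven straight from the shell, which is convenient in scripts; the file is written with the client's own permissions:

# \copy runs inside psql but writes the file on the client machine
psql -U postgres -d mydatabase \
     -c "\copy (SELECT schemaname, tablename FROM pg_tables WHERE schemaname = 'public') TO 'table_list.csv' WITH CSV HEADER"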

A list with extended table details

-- Generate a list that includes per-table details
COPY (
    SELECT 
        n.nspname AS schema_name,
        c.relname AS table_name,
        pg_catalog.pg_get_userbyid(c.relowner) AS owner,
        pg_catalog.pg_size_pretty(pg_catalog.pg_table_size(c.oid)) AS size,
        obj_description(c.oid, 'pg_class') AS description,
        c.reltuples::BIGINT AS estimated_rows,
        pg_stat_get_last_vacuum_time(c.oid) AS last_vacuum,
        pg_stat_get_last_analyze_time(c.oid) AS last_analyze
    FROM pg_catalog.pg_class c
    LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'r'
        AND n.nspname NOT IN ('pg_catalog', 'information_schema')
        AND n.nspname !~ '^pg_toast'
    ORDER BY 
        pg_catalog.pg_table_size(c.oid) DESC
) TO '/tmp/table_details.csv' WITH CSV HEADER;

Detailed output including column information

-- Detailed table and column information
COPY (
    SELECT 
        t.table_schema,
        t.table_name,
        c.ordinal_position AS column_position,
        c.column_name,
        c.data_type,
        c.character_maximum_length AS max_length,
        c.numeric_precision,
        c.numeric_scale,
        c.is_nullable,
        c.column_default
    FROM information_schema.tables t
    JOIN information_schema.columns c 
        ON t.table_schema = c.table_schema 
        AND t.table_name = c.table_name
    WHERE t.table_type = 'BASE TABLE'
        AND t.table_schema NOT IN ('pg_catalog', 'information_schema')
    ORDER BY t.table_schema, t.table_name, c.ordinal_position
) TO '/tmp/table_columns.csv' WITH CSV HEADER;

Exporting Table Lists from a Program

Python script

#!/usr/bin/env python3
import psycopg2
import csv
import json
import pandas as pd

def export_table_list_to_csv(connection_params, output_file):
    """テーブル一覧をCSVファイルに出力"""
    
    conn = psycopg2.connect(**connection_params)
    cursor = conn.cursor()
    
    query = """
        SELECT 
            schemaname AS schema,
            tablename AS table_name,
            tableowner AS owner,
            pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS total_size,
            pg_size_pretty(pg_relation_size(schemaname||'.'||tablename)) AS table_size,
            pg_size_pretty(pg_indexes_size(schemaname||'.'||tablename)) AS indexes_size
        FROM pg_tables
        WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
        ORDER BY schemaname, tablename
    """
    
    cursor.execute(query)
    
    # Write out the CSV file
    with open(output_file, 'w', newline='', encoding='utf-8') as csvfile:
        writer = csv.writer(csvfile)
        
        # Header row
        writer.writerow([desc[0] for desc in cursor.description])
        
        # Data rows
        writer.writerows(cursor.fetchall())
    
    cursor.close()
    conn.close()
    
    print(f"Wrote table list to {output_file}")

def export_detailed_schema_to_json(connection_params, output_file):
    """詳細なスキーマ情報をJSONファイルに出力"""
    
    conn = psycopg2.connect(**connection_params)
    cursor = conn.cursor()
    
    # Fetch the list of tables
    cursor.execute("""
        SELECT 
            schemaname,
            tablename
        FROM pg_tables
        WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
        ORDER BY schemaname, tablename
    """)
    
    tables = cursor.fetchall()
    schema_info = []
    
    for schema, table in tables:
        # Fetch column information for each table
        cursor.execute("""
            SELECT 
                column_name,
                data_type,
                character_maximum_length,
                is_nullable,
                column_default
            FROM information_schema.columns
            WHERE table_schema = %s AND table_name = %s
            ORDER BY ordinal_position
        """, (schema, table))
        
        columns = [
            {
                'name': col[0],
                'type': col[1],
                'max_length': col[2],
                'nullable': col[3],
                'default': col[4]
            }
            for col in cursor.fetchall()
        ]
        
        # Fetch index information
        cursor.execute("""
            SELECT 
                indexname,
                indexdef
            FROM pg_indexes
            WHERE schemaname = %s AND tablename = %s
        """, (schema, table))
        
        indexes = [
            {'name': idx[0], 'definition': idx[1]}
            for idx in cursor.fetchall()
        ]
        
        schema_info.append({
            'schema': schema,
            'table': table,
            'columns': columns,
            'indexes': indexes
        })
    
    # Write out the JSON file
    with open(output_file, 'w', encoding='utf-8') as jsonfile:
        json.dump(schema_info, jsonfile, indent=2, ensure_ascii=False)
    
    cursor.close()
    conn.close()
    
    print(f"詳細スキーマ情報を {output_file} に出力しました")

def export_to_excel(connection_params, output_file):
    """テーブル情報をExcelファイルに出力(複数シート)"""
    
    conn = psycopg2.connect(**connection_params)
    
    with pd.ExcelWriter(output_file, engine='openpyxl') as writer:
        # Table list sheet
        tables_query = """
            SELECT 
                schemaname AS schema,
                tablename AS table_name,
                tableowner AS owner
            FROM pg_tables
            WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
            ORDER BY schemaname, tablename
        """
        df_tables = pd.read_sql_query(tables_query, conn)
        df_tables.to_excel(writer, sheet_name='Tables', index=False)
        
        # Column information sheet
        columns_query = """
            SELECT 
                table_schema,
                table_name,
                column_name,
                data_type,
                is_nullable
            FROM information_schema.columns
            WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
            ORDER BY table_schema, table_name, ordinal_position
        """
        df_columns = pd.read_sql_query(columns_query, conn)
        df_columns.to_excel(writer, sheet_name='Columns', index=False)
        
        # Statistics sheet
        stats_query = """
            SELECT 
                schemaname AS schema,
                tablename AS table_name,
                n_live_tup AS row_count,
                n_dead_tup AS dead_rows,
                last_vacuum,
                last_autovacuum
            FROM pg_stat_user_tables
            ORDER BY schemaname, tablename
        """
        df_stats = pd.read_sql_query(stats_query, conn)
        df_stats.to_excel(writer, sheet_name='Statistics', index=False)
    
    conn.close()
    print(f"Excel形式で {output_file} に出力しました")

# Usage example
if __name__ == "__main__":
    connection_params = {
        'host': 'localhost',
        'database': 'mydatabase',
        'user': 'postgres',
        'password': 'password'
    }
    
    # CSV export
    export_table_list_to_csv(connection_params, 'tables.csv')
    
    # JSON export
    export_detailed_schema_to_json(connection_params, 'schema.json')
    
    # Excel export
    export_to_excel(connection_params, 'database_info.xlsx')
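The script assumes the psycopg2, pandas, and openpyxl packages are installed, for example via:

pip install psycopg2-binary pandas openpyxl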

Node.js script

const { Client } = require('pg');
const fs = require('fs');
const csv = require('csv-writer').createObjectCsvWriter;

async function exportTableListToCSV(connectionConfig, outputFile) {
    const client = new Client(connectionConfig);
    await client.connect();

    try {
        const query = `
            SELECT 
                schemaname AS schema,
                tablename AS table_name,
                tableowner AS owner
            FROM pg_tables
            WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
            ORDER BY schemaname, tablename
        `;

        const result = await client.query(query);

        // Configure the CSV writer
        const csvWriter = csv({
            path: outputFile,
            header: [
                {id: 'schema', title: 'Schema'},
                {id: 'table_name', title: 'Table Name'},
                {id: 'owner', title: 'Owner'}
            ]
        });

        // Write records to the CSV file
        await csvWriter.writeRecords(result.rows);
        console.log(`Wrote table list to ${outputFile}`);

    } finally {
        await client.end();
    }
}

async function exportDetailedSchemaToJSON(connectionConfig, outputFile) {
    const client = new Client(connectionConfig);
    await client.connect();

    try {
        // Fetch the list of tables
        const tablesQuery = `
            SELECT 
                schemaname,
                tablename
            FROM pg_tables
            WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
            ORDER BY schemaname, tablename
        `;
        
        const tablesResult = await client.query(tablesQuery);
        const schemaInfo = [];

        for (const table of tablesResult.rows) {
            // Fetch column information
            const columnsQuery = `
                SELECT 
                    column_name,
                    data_type,
                    is_nullable
                FROM information_schema.columns
                WHERE table_schema = $1 AND table_name = $2
                ORDER BY ordinal_position
            `;
            
            const columnsResult = await client.query(columnsQuery, 
                [table.schemaname, table.tablename]);

            schemaInfo.push({
                schema: table.schemaname,
                table: table.tablename,
                columns: columnsResult.rows
            });
        }

        // Write out the JSON file
        fs.writeFileSync(outputFile, JSON.stringify(schemaInfo, null, 2));
        console.log(`Wrote detailed schema information to ${outputFile}`);

    } finally {
        await client.end();
    }
}

// Usage example
const connectionConfig = {
    host: 'localhost',
    database: 'mydatabase',
    user: 'postgres',
    password: 'password',
    port: 5432,
};

exportTableListToCSV(connectionConfig, 'tables.csv');
exportDetailedSchemaToJSON(connectionConfig, 'schema.json');
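The script depends on the pg and csv-writer packages:

npm install pg csv-writer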

Output Methods by Format

HTML output

#!/bin/bash
# generate_html_report.sh

DB_NAME="mydatabase"
OUTPUT_FILE="database_report.html"

cat > $OUTPUT_FILE << 'HTML_HEADER'
<!DOCTYPE html>
<html>
<head>
    <title>Database Table Report</title>
    <style>
        body { font-family: Arial, sans-serif; margin: 20px; }
        h1 { color: #333; }
        table { border-collapse: collapse; width: 100%; margin-top: 20px; }
        th { background-color: #4CAF50; color: white; padding: 12px; text-align: left; }
        td { padding: 8px; border-bottom: 1px solid #ddd; }
        tr:hover { background-color: #f5f5f5; }
        .timestamp { color: #666; font-size: 0.9em; margin-top: 20px; }
    </style>
</head>
<body>
    <h1>Database Table Report</h1>
    <p class="timestamp">Generated on: 
HTML_HEADER

date >> $OUTPUT_FILE

cat >> $OUTPUT_FILE << 'HTML_MIDDLE'
    </p>
    <h2>Table List</h2>
HTML_MIDDLE

psql -U postgres -d $DB_NAME -H -c "
    SELECT 
        schemaname AS schema,
        tablename AS table_name,
        tableowner AS owner,
        pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size
    FROM pg_tables
    WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
    ORDER BY schemaname, tablename
" >> $OUTPUT_FILE

cat >> $OUTPUT_FILE << 'HTML_FOOTER'
</body>
</html>
HTML_FOOTER

echo "HTMLレポートを $OUTPUT_FILE に生成しました"

Markdown output

-- Generate a table list in Markdown format
-- (an explicit sort key keeps the header and separator rows on top,
--  and the size column is computed once inside the subquery)
COPY (
    SELECT 
        '| ' || schemaname || ' | ' || 
        tablename || ' | ' || 
        tableowner || ' | ' || 
        size || ' |' AS markdown_row
    FROM (
        SELECT 0 AS sort_order, 'Schema' AS schemaname, 'Table Name' AS tablename, 'Owner' AS tableowner, 'Size' AS size
        UNION ALL
        SELECT 1, '---', '---', '---', '---'
        UNION ALL
        SELECT 2,
            schemaname,
            tablename,
            tableowner,
            pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename))
        FROM pg_tables
        WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
    ) AS t
    ORDER BY sort_order, schemaname, tablename
) TO '/tmp/table_list.md' WITH (FORMAT text);

XML output

-- Output table information in XML format
-- ("table" must be quoted because it is a reserved word)
COPY (
    SELECT xmlelement(
        name database,
        xmlelement(name generated_at, CURRENT_TIMESTAMP),
        xmlagg(
            xmlelement(
                name "table",
                xmlelement(name schema, schemaname),
                xmlelement(name name, tablename),
                xmlelement(name owner, tableowner)
            )
        )
    )
    FROM pg_tables
    WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
) TO '/tmp/tables.xml';

Setting Up Scheduled Automatic Exports

Automating with a cron job

#!/bin/bash
# backup_table_list.sh

# Configuration
DB_NAME="mydatabase"
DB_USER="postgres"
OUTPUT_DIR="/var/backups/table_lists"
DATE=$(date +%Y%m%d_%H%M%S)

# Create the output directory
mkdir -p $OUTPUT_DIR

# Export the table list
psql -U $DB_USER -d $DB_NAME -c "
    SELECT 
        schemaname,
        tablename,
        tableowner,
        pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size
    FROM pg_tables
    WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
    ORDER BY schemaname, tablename
" --csv > "$OUTPUT_DIR/tables_$DATE.csv"

# Delete exports older than 30 days
find $OUTPUT_DIR -name "tables_*.csv" -mtime +30 -delete

echo "Saved table list to $OUTPUT_DIR/tables_$DATE.csv"

# Add to crontab (runs daily at 2:00 AM):
0 2 * * * /path/to/backup_table_list.sh
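One way to register that entry without opening an editor; /path/to/backup_table_list.sh is a placeholder for wherever you saved the script:

# Make the script executable, then append the job to the current crontab
chmod +x /path/to/backup_table_list.sh
( crontab -l 2>/dev/null; echo '0 2 * * * /path/to/backup_table_list.sh' ) | crontab -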

Automated export with a PostgreSQL function

-- Function that records the table list on a schedule
CREATE OR REPLACE FUNCTION export_table_list_to_file()
RETURNS void AS $$
DECLARE
    output_path TEXT;
    query TEXT;
BEGIN
    -- Build a date-stamped output path
    output_path := '/tmp/table_list_' || TO_CHAR(CURRENT_DATE, 'YYYYMMDD') || '.csv';
    
    -- Build the COPY query
    query := format('
        COPY (
            SELECT 
                current_timestamp AS export_time,
                schemaname,
                tablename,
                tableowner,
                pg_size_pretty(pg_total_relation_size(schemaname||''.''||tablename)) AS size
            FROM pg_tables
            WHERE schemaname NOT IN (''pg_catalog'', ''information_schema'')
            ORDER BY schemaname, tablename
        ) TO %L WITH CSV HEADER',
        output_path
    );
    
    -- Execute the COPY
    EXECUTE query;
    
    RAISE NOTICE 'Exported table list to %', output_path;
END;
$$ LANGUAGE plpgsql;

-- Scheduled execution with pg_cron (the extension must be installed and added to shared_preload_libraries)
CREATE EXTENSION IF NOT EXISTS pg_cron;

-- Run daily at 3:00 AM
SELECT cron.schedule(
    'export_table_list',
    '0 3 * * *',
    'SELECT export_table_list_to_file()'
);
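To confirm the job was registered, you can query pg_cron's metadata table cron.job:

# Registered pg_cron jobs live in the cron.job table
psql -U postgres -d mydatabase -c "SELECT jobid, schedule, command FROM cron.job;"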

Optimizations for Large Databases

Faster export with parallel processing

#!/usr/bin/env python3
import psycopg2
import concurrent.futures
import csv

def export_schema_tables(connection_params, schema_name, output_dir):
    """特定スキーマのテーブル情報を出力"""
    conn = psycopg2.connect(**connection_params)
    cursor = conn.cursor()
    
    cursor.execute("""
        SELECT 
            tablename,
            tableowner,
            pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size
        FROM pg_tables
        WHERE schemaname = %s
        ORDER BY tablename
    """, (schema_name,))
    
    output_file = f"{output_dir}/{schema_name}_tables.csv"
    
    with open(output_file, 'w', newline='') as csvfile:
        writer = csv.writer(csvfile)
        writer.writerow(['Table', 'Owner', 'Size'])
        writer.writerows(cursor.fetchall())
    
    cursor.close()
    conn.close()
    
    return f"Schema {schema_name} exported to {output_file}"

def parallel_export_all_schemas(connection_params, output_dir):
    """全スキーマを並列処理で出力"""
    conn = psycopg2.connect(**connection_params)
    cursor = conn.cursor()
    
    # Fetch the list of schemas
    cursor.execute("""
        SELECT DISTINCT schemaname 
        FROM pg_tables
        WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
    """)
    
    schemas = [row[0] for row in cursor.fetchall()]
    cursor.close()
    conn.close()
    
    # Export each schema in parallel
    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        futures = []
        for schema in schemas:
            future = executor.submit(
                export_schema_tables, 
                connection_params, 
                schema, 
                output_dir
            )
            futures.append(future)
        
        # Collect the results
        for future in concurrent.futures.as_completed(futures):
            print(future.result())

Differential output

-- Table for recording table change history
CREATE TABLE IF NOT EXISTS table_change_log (
    id SERIAL PRIMARY KEY,
    check_date TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    schema_name TEXT,
    table_name TEXT,
    action TEXT, -- 'CREATED', 'DROPPED', 'MODIFIED'
    details JSONB
);

-- Function that detects changes and records them in the log
-- (the inserts reuse the detection queries via CTEs; having the function
--  call itself to populate the log, as before, would recurse endlessly)
CREATE OR REPLACE FUNCTION detect_table_changes()
RETURNS TABLE(
    action TEXT,
    schema_name TEXT,
    table_name TEXT,
    details TEXT
) AS $$
DECLARE
    last_check TIMESTAMP;
BEGIN
    -- Time of the last check
    SELECT MAX(check_date) INTO last_check FROM table_change_log;
    
    -- Newly created tables: log them and return them in one pass
    RETURN QUERY
    WITH created AS (
        INSERT INTO table_change_log (schema_name, table_name, action)
        SELECT t.schemaname, t.tablename, 'CREATED'
        FROM pg_tables t
        WHERE t.schemaname NOT IN ('pg_catalog', 'information_schema')
            AND NOT EXISTS (
                SELECT 1 FROM table_change_log l
                WHERE l.schema_name = t.schemaname
                    AND l.table_name = t.tablename
                    AND l.action IN ('CREATED', 'MODIFIED')
            )
        RETURNING table_change_log.schema_name, table_change_log.table_name
    )
    SELECT 
        'CREATED'::TEXT,
        c.schema_name,
        c.table_name,
        'Table created after ' || COALESCE(last_check::TEXT, 'initial check')
    FROM created c;
    
    -- Dropped tables (logged previously but no longer in pg_tables)
    RETURN QUERY
    WITH dropped AS (
        INSERT INTO table_change_log (schema_name, table_name, action)
        SELECT DISTINCT l.schema_name, l.table_name, 'DROPPED'
        FROM table_change_log l
        WHERE l.action != 'DROPPED'
            AND NOT EXISTS (
                SELECT 1 FROM pg_tables t
                WHERE t.schemaname = l.schema_name
                    AND t.tablename = l.table_name
            )
            -- skip tables whose drop has already been recorded
            AND NOT EXISTS (
                SELECT 1 FROM table_change_log x
                WHERE x.schema_name = l.schema_name
                    AND x.table_name = l.table_name
                    AND x.action = 'DROPPED'
            )
        RETURNING table_change_log.schema_name, table_change_log.table_name
    )
    SELECT 
        'DROPPED'::TEXT,
        d.schema_name,
        d.table_name,
        'Table dropped'::TEXT
    FROM dropped d;
END;
$$ LANGUAGE plpgsql;

-- Export the detected changes
COPY (
    SELECT * FROM detect_table_changes()
) TO '/tmp/table_changes.csv' WITH CSV HEADER;
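If the dated CSV snapshots from the cron job earlier are already accumulating, a lighter-weight alternative is to diff the two most recent files on the client. A sketch, assuming the directory layout used by backup_table_list.sh:

# Compare the two most recent table-list snapshots
cd /var/backups/table_lists
latest=$(ls -t tables_*.csv | sed -n '1p')
previous=$(ls -t tables_*.csv | sed -n '2p')
diff "$previous" "$latest" > table_changes_diff.txt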

Advanced Output for Documentation Generation

Generating a data dictionary

-- Export a complete data dictionary as CSV
COPY (
    WITH table_info AS (
        SELECT 
            t.table_schema,
            t.table_name,
            obj_description(c.oid) AS table_comment,
            pg_size_pretty(pg_total_relation_size(c.oid)) AS table_size,
            c.reltuples::BIGINT AS row_count
        FROM information_schema.tables t
        JOIN pg_class c ON c.relname = t.table_name
        JOIN pg_namespace n ON n.oid = c.relnamespace AND n.nspname = t.table_schema
        WHERE t.table_type = 'BASE TABLE'
            AND t.table_schema NOT IN ('pg_catalog', 'information_schema')
    ),
    column_info AS (
        SELECT 
            c.table_schema,
            c.table_name,
            c.column_name,
            c.ordinal_position,
            c.data_type,
            c.character_maximum_length,
            c.numeric_precision,
            c.numeric_scale,
            c.is_nullable,
            c.column_default,
            col_description(pgc.oid, c.ordinal_position) AS column_comment
        FROM information_schema.columns c
        JOIN pg_class pgc ON pgc.relname = c.table_name
        JOIN pg_namespace n ON n.oid = pgc.relnamespace AND n.nspname = c.table_schema
        WHERE c.table_schema NOT IN ('pg_catalog', 'information_schema')
    )
    SELECT 
        ti.table_schema,
        ti.table_name,
        ti.table_comment,
        ti.table_size,
        ti.row_count,
        ci.column_name,
        ci.ordinal_position,
        ci.data_type,
        ci.character_maximum_length,
        ci.is_nullable,
        ci.column_default,
        ci.column_comment
    FROM table_info ti
    JOIN column_info ci ON ti.table_schema = ci.table_schema 
        AND ti.table_name = ci.table_name
    ORDER BY ti.table_schema, ti.table_name, ci.ordinal_position
) TO '/tmp/data_dictionary.csv' WITH CSV HEADER;

DOT output for ER diagrams

#!/usr/bin/env python3
import psycopg2

def generate_dot_file(connection_params, output_file):
    """ER図用のDOTファイルを生成"""
    conn = psycopg2.connect(**connection_params)
    cursor = conn.cursor()
    
    # Fetch foreign-key relationships
    cursor.execute("""
        SELECT
            tc.table_schema,
            tc.table_name,
            kcu.column_name,
            ccu.table_schema AS foreign_table_schema,
            ccu.table_name AS foreign_table_name,
            ccu.column_name AS foreign_column_name
        FROM information_schema.table_constraints AS tc
        JOIN information_schema.key_column_usage AS kcu
            ON tc.constraint_name = kcu.constraint_name
        JOIN information_schema.constraint_column_usage AS ccu
            ON ccu.constraint_name = tc.constraint_name
        WHERE tc.constraint_type = 'FOREIGN KEY'
            AND tc.table_schema NOT IN ('pg_catalog', 'information_schema')
    """)
    
    relationships = cursor.fetchall()
    
    # Fetch the list of tables
    cursor.execute("""
        SELECT DISTINCT
            table_schema,
            table_name
        FROM information_schema.tables
        WHERE table_type = 'BASE TABLE'
            AND table_schema NOT IN ('pg_catalog', 'information_schema')
    """)
    
    tables = cursor.fetchall()
    
    # Write the DOT file
    with open(output_file, 'w') as f:
        f.write('digraph ER {\n')
        f.write('  rankdir=LR;\n')
        f.write('  node [shape=record];\n\n')
        
        # Define the table nodes
        for schema, table in tables:
            f.write(f'  "{schema}.{table}" [label="{schema}.{table}"];\n')
        
        f.write('\n')
        
        # Define the relationships
        for rel in relationships:
            f.write(f'  "{rel[0]}.{rel[1]}" -> "{rel[3]}.{rel[4]}" ')
            f.write(f'[label="{rel[2]}->{rel[5]}"];\n')
        
        f.write('}\n')
    
    cursor.close()
    conn.close()
    
    print(f"DOTファイルを {output_file} に生成しました")
    print(f"Graphvizでイメージを生成: dot -Tpng {output_file} -o er_diagram.png")

Security and Best Practices

Masking sensitive information

-- Export with sensitive schemas and tables masked
COPY (
    SELECT 
        CASE 
            WHEN schemaname IN ('hr', 'finance') THEN '***RESTRICTED***'
            ELSE schemaname
        END AS schema,
        CASE 
            WHEN tablename LIKE '%password%' 
                OR tablename LIKE '%secret%' 
                OR tablename LIKE '%token%' THEN '***SENSITIVE***'
            ELSE tablename
        END AS table_name,
        tableowner AS owner
    FROM pg_tables
    WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
    ORDER BY schemaname, tablename
) TO '/tmp/table_list_masked.csv' WITH CSV HEADER;

Output including access permissions

-- Export including access-privilege information
CREATE OR REPLACE FUNCTION export_tables_with_permissions()
RETURNS TABLE(
    schema_name TEXT,
    table_name TEXT,
    owner TEXT,
    select_users TEXT[],
    insert_users TEXT[],
    update_users TEXT[],
    delete_users TEXT[]
) AS $$
BEGIN
    RETURN QUERY
    SELECT 
        n.nspname::TEXT,
        c.relname::TEXT,
        pg_get_userbyid(c.relowner)::TEXT,
        -- aliasing table_privileges avoids ambiguity with the OUT parameters
        ARRAY(
            SELECT p.grantee::TEXT
            FROM information_schema.table_privileges p
            WHERE p.table_schema = n.nspname 
                AND p.table_name = c.relname 
                AND p.privilege_type = 'SELECT'
        ),
        ARRAY(
            SELECT p.grantee::TEXT
            FROM information_schema.table_privileges p
            WHERE p.table_schema = n.nspname 
                AND p.table_name = c.relname 
                AND p.privilege_type = 'INSERT'
        ),
        ARRAY(
            SELECT p.grantee::TEXT
            FROM information_schema.table_privileges p
            WHERE p.table_schema = n.nspname 
                AND p.table_name = c.relname 
                AND p.privilege_type = 'UPDATE'
        ),
        ARRAY(
            SELECT p.grantee::TEXT
            FROM information_schema.table_privileges p
            WHERE p.table_schema = n.nspname 
                AND p.table_name = c.relname 
                AND p.privilege_type = 'DELETE'
        )
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'r'
        AND n.nspname NOT IN ('pg_catalog', 'information_schema');
END;
$$ LANGUAGE plpgsql;

-- Write the results to a file
COPY (
    SELECT * FROM export_tables_with_permissions()
) TO '/tmp/tables_with_permissions.csv' WITH CSV HEADER;

Summary: Choose the Output Method That Fits the Job

We have covered a range of approaches for exporting PostgreSQL table lists to files.

Key points:

  1. The psql command is the simplest, most basic approach
  2. The COPY command can write results directly to a file
  3. Programs give fine-grained control and format conversion
  4. Scheduled automatic exports support change management
  5. Masking and permission checks cover security requirements

Recommended approach by use case:

  • Simple list → CSV output from psql
  • Detailed information → COPY output from an SQL query
  • Documentation → HTML/Markdown generated from a program
  • Periodic reports → automation with cron or pg_cron
  • Large databases → parallel processing or differential output

Choose the method that fits your needs and keep your database management efficient!
