JPA 2 Delete/Insert order from Metamodel

I'm trying to use the JPA 2 metamodel to figure out the order in which to insert/delete rows in a database so that foreign-key constraints are not an issue (to be used later in Java code). This is part of a backup/restore approach using JPA.

Here's my approach:

  1. Group tables by the number of relationships/constraints (only one-to-many and one-to-one relationships are considered)
  2. Tables with zero relationships (as per #1) can have records added/deleted without issue
  3. Tables with one relationship can have records added/deleted without issue as long as the related table is already "ready"

By "ready" I mean that all of a table's related tables already have their records populated, so foreign keys are valid for insert, or that no other tables reference records in this table, so it is safe to delete.

I'm sure it'll be some kind of recursive approach, but I got stuck. Any help is more than welcome.
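To make the intent concrete, here is a rough sketch of the kind of repeated "ready" pass I have in mind (names like zeroRefTables and tablesWithRefs are just placeholders for the structures built in the code below):

// Rough sketch only: repeatedly move tables whose referenced tables are already
// ordered, until nothing changes. Anything left over is part of a cycle.
List<String> ordered = new ArrayList<String>(zeroRefTables);   // tables with no references go first
List<String> pending = new ArrayList<String>(tablesWithRefs);  // tables that reference other tables
boolean changed = true;
while (changed && !pending.isEmpty()) {
    changed = false;
    for (Iterator<String> it = pending.iterator(); it.hasNext();) {
        String table = it.next();
        if (ordered.containsAll(references.get(table))) {      // every referenced table is "ready"
            ordered.add(table);
            it.remove();
            changed = true;
        }
    }
}
// "ordered" is the insert order; reversing it gives the delete order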

Here's the code so far:

/**
 * Get the execution order from the EntityManager meta data model.
 *
 * This will fail if the EntityManager is not JPA 2 compliant
 * @param em EntityManager to get the metadata from
 * @return ArrayList containing the order to process tables
 */
protected static ArrayList<String> getProcessingOrder(EntityManager em) {
    ArrayList<String> tables = new ArrayList<String>();
    //This holds the amount of relationships and the tables with that same amount
    HashMap<Integer, ArrayList<String>> tableStats = new HashMap<Integer, ArrayList<String>>();
    //This holds the table and the tables referenced by it
    HashMap<String, ArrayList<String>> references = new HashMap<String, ArrayList<String>>();
    for (EntityType et : em.getMetamodel().getEntities()) {
        Logger.getLogger(XincoBackupManager.class.getSimpleName()).log(Level.FINER, et.getName());
        int amount = 0;
        Iterator<SingularAttribute> sIterator = et.getSingularAttributes().iterator();
        while (sIterator.hasNext()) {
            SingularAttribute next = sIterator.next();
            switch (next.getPersistentAttributeType()) {
                case BASIC:
                case ELEMENT_COLLECTION:
                case EMBEDDED:
                case ONE_TO_MANY:
                case ONE_TO_ONE:
                    Logger.getLogger(XincoBackupManager.class.getSimpleName()).log(Level.FINER,
                            "Ignoring: {0}", next.getName());
                    break;
                case MANY_TO_MANY:
                case MANY_TO_ONE:
                    Logger.getLogger(XincoBackupManager.class.getSimpleName()).log(Level.INFO,
                            "{3} has a {2} relationship: {0} with: {1}",
                            new Object[]{next.getName(), next.getBindableJavaType(),
                                next.getPersistentAttributeType().name(), et.getName()});
                    if (!references.containsKey(et.getName())) {
                        references.put(et.getName(), new ArrayList<String>());
                    }
                    references.get(et.getName()).add(next.getBindableJavaType().getSimpleName());
                    amount++;
                    break;
                default:
                    Logger.getLogger(XincoBackupManager.class.getSimpleName()).log(Level.SEVERE,
                            "Unexpected value: {0}", next.getName());
                    break;
            }
        }
        Iterator<PluralAttribute> pIterator = et.getPluralAttributes().iterator();
        while (pIterator.hasNext()) {
            PluralAttribute next = pIterator.next();
            switch (next.getPersistentAttributeType()) {
                case BASIC:
                case ELEMENT_COLLECTION:
                case EMBEDDED:
                case ONE_TO_MANY:
                case MANY_TO_MANY:
                    Logger.getLogger(XincoBackupManager.class.getSimpleName()).log(Level.FINER,
                            "Ignoring: {0}", next.getName());
                    break;
                case MANY_TO_ONE:
                case ONE_TO_ONE:
                    Logger.getLogger(XincoBackupManager.class.getSimpleName()).log(Level.INFO,
                            "{3} has a {2} relationship: {0} with: {1}",
                            new Object[]{next.getName(), next.getBindableJavaType(),
                                next.getPersistentAttributeType().name(), et.getName()});
                    if (!references.containsKey(et.getName())) {
                        references.put(et.getName(), new ArrayList<String>());
                    }
                    references.get(et.getName()).add(next.getBindableJavaType().getSimpleName());
                    amount++;
                    break;
                default:
                    Logger.getLogger(XincoBackupManager.class.getSimpleName()).log(Level.SEVERE,
                            "Unexpected value: {0}", next.getName());
                    break;
            }
        }
        if (!tableStats.containsKey(amount)) {
            tableStats.put(amount, new ArrayList<String>());
        }
        tableStats.get(amount).add(et.getName());
    }
    Iterator<String> iterator = references.keySet().iterator();
    while (iterator.hasNext()) {
        String next = iterator.next();
        Iterator<String> iterator1 = references.get(next).iterator();
        StringBuilder refs = new StringBuilder();
        while (iterator1.hasNext()) {
            refs.append(iterator1.next()).append("\n");
        }
        Logger.getLogger(XincoBackupManager.class.getSimpleName()).log(Level.FINER,
                "References for {0}:\n{1}", new Object[]{next, refs.toString()});
    }
    //Need to sort entities with relationships even further
    ArrayList<String> temp = new ArrayList<String>();
    for (Entry<Integer, ArrayList<String>> e : tableStats.entrySet()) {
        if (e.getKey() > 0) {
            Logger.getLogger(XincoBackupManager.class.getSimpleName()).log(Level.INFO, "Tables with {0} references", e.getKey());
            for (String t : e.getValue()) {
                //Check the relationships of the tables
                //Here's where I need help
                boolean ready = true;
                for (String ref : references.get(t)) {
                    if (!temp.contains(ref)) {
                        Logger.getLogger(XincoBackupManager.class.getSimpleName()).log(Level.INFO,
                                "{0} is not ready. Referenced table {1} is not ready yet", new Object[]{t, ref});
                        ready = false;
                    }
                }
                if (ready) {
                    Logger.getLogger(XincoBackupManager.class.getSimpleName()).log(Level.INFO, "{0} is ready.", t);
                    temp.add(t);
                }
            }
            //-------------------------------------------------------
        } else {
            temp.addAll(e.getValue());
        }
    }
    for (Entry<Integer, ArrayList<String>> e : tableStats.entrySet()) {
        Logger.getLogger(XincoBackupManager.class.getSimpleName()).log(Level.FINER,
                "Amount of relationships: {0}", e.getKey());
        StringBuilder list = new StringBuilder();
        for (String t : e.getValue()) {
            list.append(t).append("\n");
        }
        Logger.getLogger(XincoBackupManager.class.getSimpleName()).log(Level.FINER, list.toString());
    }
    tables.addAll(temp);
    return tables;
}
Answer

I'd approach this problem with database metadata from JDBC.

The following methods from java.sql.DatabaseMetaData can be used here:

// to get the tables
ResultSet getTables(String catalog, String schemaPattern, String tableNamePattern, String[] types)
    throws SQLException

// to get the foreign keys that reference a given table
ResultSet getExportedKeys(String catalog, String schema, String table)
    throws SQLException

I have used this approach in a few applications and it works quite well.
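
A minimal sketch of what that could look like (the class and method names are illustrative; the column names TABLE_NAME and FKTABLE_NAME come from the JDBC DatabaseMetaData contract):

import java.sql.*;
import java.util.*;

public class FkOrder {
    /** Sketch: order tables so that referenced (parent) tables come before referencing (child) tables. */
    public static List<String> insertOrder(Connection con) throws SQLException {
        DatabaseMetaData meta = con.getMetaData();
        // table -> set of tables it references (its parents)
        Map<String, Set<String>> parents = new LinkedHashMap<String, Set<String>>();
        ResultSet rs = meta.getTables(null, null, "%", new String[]{"TABLE"});
        while (rs.next()) {
            parents.put(rs.getString("TABLE_NAME"), new HashSet<String>());
        }
        rs.close();
        for (String table : parents.keySet()) {
            // getExportedKeys lists the foreign keys in other tables that point to this table
            ResultSet fk = meta.getExportedKeys(null, null, table);
            while (fk.next()) {
                String child = fk.getString("FKTABLE_NAME");
                if (parents.containsKey(child)) {
                    parents.get(child).add(table); // "child" references "table"
                }
            }
            fk.close();
        }
        // Simple fixpoint topological sort: repeatedly emit tables whose parents are already emitted.
        List<String> order = new ArrayList<String>();
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Map.Entry<String, Set<String>> e : parents.entrySet()) {
                if (!order.contains(e.getKey()) && order.containsAll(e.getValue())) {
                    order.add(e.getKey());
                    changed = true;
                }
            }
        }
        // Anything missing from "order" at this point is involved in a cycle.
        return order; // this is the insert order; reverse it for deletes
    }
}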

Although this approach doesn't use the JPA metamodel, I believe operating at the JDBC metadata level is more appropriate for your problem.

As there can be cyclic dependencies, which are difficult to handle via such a foreign-key dependency graph, you could alternatively (see the sketch after the lists below):

for delete

  • disable constraints
  • delete content
  • enable constraints

for add

  • disable constraints
  • add content
  • enable constraints
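
A minimal JDBC sketch of that idea, assuming MySQL (the statements for toggling constraints are database-specific; other databases use different commands, e.g. H2 uses SET REFERENTIAL_INTEGRITY FALSE):

// Sketch only: disable foreign key checks, do the bulk work, re-enable them.
// SET FOREIGN_KEY_CHECKS is MySQL syntax; adjust for your database.
Statement st = con.createStatement();
try {
    st.execute("SET FOREIGN_KEY_CHECKS = 0");
    // ... delete or insert the content here, in any order ...
    st.execute("SET FOREIGN_KEY_CHECKS = 1");
} finally {
    st.close();
}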



