hi4ppl
Best strategy for doing an Oracle metadata clone
Hello experts,
I have a database running a live application, and I need to create a metadata clone of this database in another Oracle database. My question is: what is the best way to go about this, and how do I do it? Is RMAN a good way, and is it even possible with RMAN to take a metadata-only backup of the tables, procedures, etc.?
I don't need the actual data, just the metadata: tables, views, stored procedures, etc.
Database: Oracle 10g
OS: Sun SPARC
thanks for help
ASKER CERTIFIED SOLUTION
Why can't you get that while the system is up and running? It is only hitting the dictionary tables, so user impact would be pretty minimal.
You can only do it with a live database; if you switch off the database, you can't extract anything.
On any other computer, install the full Administrator Oracle client, and expdp and exp are installed with it.
Set up tnsnames.ora in that local client, and you can export all the metadata from that remote computer.
You don't even need to get on the SPARC server; you can even do it from a Windows laptop (or a Mac).
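A minimal sketch of that remote setup (the host, service name, and credentials below are placeholders, not taken from this thread):

```shell
# tnsnames.ora entry on the local client machine (hypothetical names):
#
# LIVEDB =
#   (DESCRIPTION =
#     (ADDRESS = (PROTOCOL = TCP)(HOST = sparc-host)(PORT = 1521))
#     (CONNECT_DATA = (SERVICE_NAME = livedb))
#   )

# Metadata-only Data Pump export, launched from the client over the network:
expdp system/password@LIVEDB full=y content=metadata_only \
      directory=DATA_PUMP_DIR dumpfile=meta_only.dmp logfile=meta_only.log
```

One caveat: Data Pump writes the dump file to a directory object on the database server, not on the client machine. If you need the file written locally on the client, the classic exp utility does that instead.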
ASKER
Thanks for the reply. For expdp with CONTENT=METADATA_ONLY, does this export all procedures and scripts, or does it only export the tables and views? My goal is to export those as well, so that I have an exact copy of the database (tables, procedures, etc.) on the second machine...
and I would love to see a command example :D thanks
It should export everything except table data.
As for a command example, there are many in the documentation link that I provided. If you read through the information on the CONTENT parameter, there is actually an example right in the documentation on how to use it.
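For reference, one way to sketch this is to export the metadata and then have impdp render it as a DDL script you can inspect, so you can see exactly which procedures, views, and other objects came across (the file names and credentials here are illustrative):

```shell
# Metadata-only export of the whole database (DDL only, no table data)
expdp system/password full=y content=metadata_only \
      directory=DATA_PUMP_DIR dumpfile=full_meta.dmp logfile=exp_meta.log

# Instead of importing, write the DDL in the dump to a reviewable SQL file
impdp system/password full=y \
      directory=DATA_PUMP_DIR dumpfile=full_meta.dmp sqlfile=full_meta_ddl.sql
```

The SQLFILE run performs no changes in the target database; it only extracts the CREATE statements, which is a convenient way to confirm the procedures and views are in the dump before doing the real import.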
ASKER
Hi,
Okay, I will try this tomorrow and update here.
regards
ASKER
Hi,
I used the metadata-only option, but it does not export the tablespace creation commands. Do I have to recreate all the tablespaces manually? I tried this in a test environment, not in the live system yet, but that is the issue: it does not export the tablespace names.
Did you include FULL=Y as a parameter? If so, it should do the tablespace creates. If not, try INCLUDE=TABLESPACE in your command line.
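As a hedged sketch of the second suggestion: in full mode you can restrict the export to just the tablespace definitions with INCLUDE (names and files below are examples, not from the thread):

```shell
# Full-mode export restricted to tablespace DDL only:
# with INCLUDE, only the listed object types are exported
expdp system/password full=y content=metadata_only include=tablespace \
      directory=DATA_PUMP_DIR dumpfile=ts_only.dmp logfile=ts_only.log
```

Note that INCLUDE=TABLESPACE makes sense in full mode; a schema-mode export only covers objects owned by the schemas, which is why tablespace DDL was missing.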
Considering you only want metadata, I'd pre-create the tablespaces yourself.
> after the import they won't be holding data anyway
You could have a 10TB source database and a 1GB new database after the import, provided the initial extents of the tables are left at 64KB or deferred_segment_creation is left on.
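Pre-creating a small tablespace on the target before the import could look like this (the tablespace name, datafile path, and sizes are placeholders, not values from the thread):

```shell
# Create a small, autoextending tablespace on the target database;
# 64K uniform extents keep empty tables tiny after the metadata import
sqlplus -s system/password <<'EOF'
CREATE TABLESPACE users_ts
  DATAFILE '/u01/oradata/clone/users_ts01.dbf' SIZE 100M
  AUTOEXTEND ON NEXT 10M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64K;
EOF
```

Repeat for each tablespace name referenced by the source schemas, or use REMAP_TABLESPACE on the impdp command line to map everything into fewer tablespaces.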
ASKER