Tutorials
Updated May 18, 2018
Dear All,
What are the proper steps for a client copy from the Production server to the Quality server?
Please help.
Comments
What is your alternative if the database is huge? Otherwise the client copy could take days to complete, especially if PRD and QUA are on separate boxes.
A client copy (SCCL) can be used, but beware if the client on PRD is big.
You could take a tape backup and restore it as a separate instance, or overwrite the same instance and then do a local client copy. But I would like to know if there are other ways or tricks.
That is what I have been doing all along: using a backup of PRD and restoring it onto the QUA system. I thought maybe you or others out there might have a more efficient way. This refresh of the QUA database usually takes me 8 to 9 hours just to restore the data, then another 2 hours to re-adjust other parameters before I can bring R/3 up on QUA.
The main problems will be your transports, etc. But if you use a tape backup you will lose what you already have on QUA.
Let me know.
Hi Guys,
On the same note, does anyone know what should be done with background jobs that use tRFC under a user, say "USR001"? If I don't want that user on QA but I still want the jobs to run, what is the easiest way to accomplish this?
Hi guys,
It depends on the system size and the available time.
For a small system you can do a remote client copy.
Another option is to make a client export on the PRD system and then a client import in the Q system.
For a large system there is no other way – just do a system copy.
In a few words:
make a backup, remove the Q system from the transport system and from CUA, restore on the Q system, re-create the control files to change the SID (Oracle), start up the DB, and perform several post-copy steps.
Here is the plan that I follow:
Generally –
follow note 147243. The difference in this procedure is that the DB load is not interrupted as proposed in the note; instead, I wait for the initial installation to fully complete and then do the next steps.
1. Adjust the memory parameters (Oracle, SAP) and the page file of the source system. If necessary, also adjust the number of work processes. This step is optional; most often it is not done, and the profile adjustments are made later in the target system.
2. Trace the control file of the source system.
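For illustration, a minimal SQL*Plus sketch of what this tracing step looks like (run on the source system with SYSDBA rights; where the generated trace file ends up depends on the user_dump_dest setting, on SAP installations typically the saptrace/usertrace directory):
CONNECT / AS SYSDBA
-- writes a CREATE CONTROLFILE script for the current database into user_dump_dest;
-- that trace is the basis for the Control.sql script edited in step 16 below
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;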
4. Create a new user with admin rights (put this user in the ORA_DBA group).
5. Log on as this user (local/domain) and perform a new installation as per the installation guide.
6. Do this only if this is a second SAP instance installed on the same host:
see note 576919 (ORA-12505). The Oracle listener is changed during the installation; adjust listener.ora.
- If the system fails on the DBCONNECTTEST step (this can occur if you install more than one instance on the same host), check whether the environment variable LOCAL is defined. If it is, it should have the correct value for the SID and it must be defined as a user variable, not as a system variable. Also restart the computer, then start the database of the new SID.
- Terminal services can also cause this error – see note 441518. Note 556232 explains the environment settings.
- If an error occurs on the DIPGNTAB_NT step, see note 162266 and especially note 400241 (ORA-1403 or ORA-1017).
7. Patch Oracle on the target system, if necessary (so that it has the same patch level as the source system).
8. Update the kernel of the target system (use the newest kernel available).
9. Stop Oracle.
13. Copy the Oracle init<SID> profiles.
15. Start the Oracle services.
16. Modify the Control.sql script (the control file trace from step 2) so that it re-creates the control files with the new SID.
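As an illustration only, here is a heavily shortened sketch of what the edited script typically ends up looking like when the SID is changed (here from PRD to QUA). All file names, sizes and the character set below are invented placeholders; the traced control file contains the real values and the complete file lists.
CONNECT / AS SYSDBA
STARTUP NOMOUNT
-- SET DATABASE renames the database to the new SID; the database must then be opened with RESETLOGS
CREATE CONTROLFILE REUSE SET DATABASE "QUA" RESETLOGS NOARCHIVELOG
    MAXLOGFILES 255
    MAXDATAFILES 254
LOGFILE
  GROUP 1 ('D:\ORACLE\QUA\ORIGLOGA\LOG_G11M1.DBF') SIZE 50M,
  GROUP 2 ('D:\ORACLE\QUA\ORIGLOGB\LOG_G12M1.DBF') SIZE 50M
DATAFILE
  'D:\ORACLE\QUA\SAPDATA1\SYSTEM_1\SYSTEM.DATA1'
  -- ...every remaining data file exactly as listed in the traced script...
CHARACTER SET WE8DEC
;
ALTER DATABASE OPEN RESETLOGS;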
Start the SAP system.
19. If the system does not start, delete old OPS$ user and create it again (Note 50088) – only for R/3 4.6C
Only for BW (or systems based on WAS 6.20):
-Use note 659509 in combination with 400241. Use the newest oradbusr.sql script to create the new OPS$ user – it is attached to the current version of note 50088. Change the password/owner of the SAPUSER table as described in 659509 – use the old SID for the "OPS$" user:
insert into ops$...
GRANT CONNECT, RESOURCE TO "OPS$...
-Give the user a default and a temporary tablespace, for example:
ALTER USER "OPS$GRATHDB1GRBADM" DEFAULT TABLESPACE PSAPIPWUSR
TEMPORARY TABLESPACE PSAPTEMP IDENTIFIED EXTERNALLY;
-Grant the necessary roles to the new SAP user (SAPGRB in this example).
-Apply note 534765 to change the dbs_ora_schema environment variable to the old SID (the SID which owns the SAP tables in the schema).
-Create OPS$SAPService user (example):
create user "OPS$GRATHDB1SAPSERVICEGRB" DEFAULT TABLESPACE SYSTEM
TEMPORARY TABLESPACE SYSTEM IDENTIFIED EXTERNALLY;
-Grant necessary rights to OPS$SAPService user:
GRANT CONNECT, RESOURCE, SAPDBA TO "OPS$GRATHDB1SAPSERVICEGRB";
-Create the synonym:
CREATE SYNONYM "OPS$GRATHDB1SAPSERVICEGRB".SAPUSER FOR
"OPS$SAPBWIPWADM".SAPUSER;
-Grant select and update on the SAPUSER table to the SAPService user:
GRANT SELECT, UPDATE ON "OPS$SAPBWIPWADM".SAPUSER TO "OPS$GRATHDB1SAPSERVICEGRB";
-Drop the old synonym:
DROP SYNONYM "OPS$SAPBWSAPSERVICEIPW".SAPUSER;
-Start SAP system.
20. If the system does not start yet, apply note 8179
21. Post Implementation steps
These steps are derived from the Homogeneous Copy Guide, section "post copy activities".
-Delete all irrelevant RFC destinations in SM59
-Delete old CUA settings, if they exist (SCUA, BD64)
-SPAD – adjust printers
-Delete entries in tables:
sqlplus
connect sapr3/sap;
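-- clear statistics, monitoring, DBA log, buffer synchronization, profile and lock entries copied over from the source system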
delete from DBSTATHORA;
delete from DBSTAIHORA;
delete from DBSTATIORA;
delete from DBSTATTORA;
delete from MONI;
delete from PAHI;
delete from OSMON;
delete from DBSNP;
delete from SDBAH;
delete from SDBAD;
delete from SDBAP;
delete from SDBAR;
delete from DDLOG;
delete from TPFET;
delete from TPFHT;
delete from TLOCK;
commit;
exit;
For systems based on WAS 6.20, check the Homogeneous Copy Guide for the tables whose entries must be deleted.
-Delete all unnecessary jobs in SM37
-Execute SICK, SM28 (Installation check)
-SE06 (Choose DB Copy)
Start transaction SE06 and choose ‘Database copy or migration’. Now click the ‘Processing after installation’ [Execute] button.
Accept the given source system.
SAP will now ask whether the originals have to be changed from the source system name to the target system name. Only answer this question with yes if this installation does not stay within the same landscape.
-SE38 – execute report RSBTCDEL (mark the delete field with force mode). This deletes old batch jobs according to your criteria
-SP12 – TemSe consistency check
-Execute DB02
-Configure STMS
-RZ10 – import new profiles
-SE61 – adapt the logon text
-Adapt the picture after logon
-Delete unnecessary clients
-Import necessary requests
-Add the system to CUA?
-Install Documentation
Additional steps for BW only – follow closely note 184754
a)In the target BW, change the contents of field "target host" in all RFC connections (destinations) for R/3 and DataMart source systems (Transaction SM59) to a nonsensical, nonexistent address (such as `nowhere`). Then delete ALL R/3 and DataMart source systems in the Administrator Workbench source system tree. Caution: This step deletes all PSA tables of these source systems - the data are lost! A message is generated stating that the source system cannot be accessed (since you deleted the host of the RFC connection). Select "Ignore".
Confirm the prompts until all transfer structures are deleted – track this under "Transfer structure". This operation deletes the transfer structures and transfer rules for the affected source systems. It also asks whether you want to delete the RFC destinations and logical systems of the source systems (SALE).
b)The "MySelf" logical system (based on the old SID): DO NOT create a new logical system manually (e.g. GRGRB400). In the BDLS step this is done automatically by the report RBDLSMAP
c)Follow note 121163
d)Before running BDLS, adapt ROLLBACK segments (if necessary)
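If the rollback segments do need adapting for the BDLS run, a minimal sketch of the kind of statements involved is shown below. The segment name, tablespace and storage values are invented placeholders, and this only applies to releases still using manually managed rollback segments rather than automatic undo management.
CONNECT / AS SYSDBA
-- create one large rollback segment for the long-running BDLS conversion
CREATE ROLLBACK SEGMENT prs_bdls
  TABLESPACE psaproll
  STORAGE (INITIAL 20M NEXT 20M MINEXTENTS 2 MAXEXTENTS UNLIMITED);
ALTER ROLLBACK SEGMENT prs_bdls ONLINE;
-- after BDLS has finished:
-- ALTER ROLLBACK SEGMENT prs_bdls OFFLINE;
-- DROP ROLLBACK SEGMENT prs_bdls;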
Yes, that was one of the items on my checklist prior to each refresh. I have to ensure that all the transports sitting on QUA are promoted to PRD before the backup is taken for the refresh. It is a hassle, which is the reason I asked if there was a better way. I thought of a client copy, but with the size of the client (over 200 GB) it would not be workable. Thanks for the advice and the discussion.
Hi again,
Yes, there is another way to do a refresh of the data, BUT it is more complex.
You have to install an MWB system. The Migration Workbench (MWB) allows you to extract, according to specific rules, a subset of data from a source system into a target system. This can be used to extract a reduced amount of test data from a production system into a test system. The extraction processes run on a dedicated server (the MWB server) which connects via RFC to the source and the target systems. Since most of the space is required for transactional data, the idea is to reduce only the amount of transactional data.