The Update Manager Dashboard provides a central location for all update tasks. The dashboard provides analytics for PUM (PeopleSoft Update Manager) that help determine your current level of maintenance. The various pivot grids on the dashboard give access to the maintenance logs and bug information for each database uploaded to the PeopleSoft Update Image.
There is no question that it is a very useful feature, especially the About PeopleSoft Image page, which displays information about the PeopleSoft Image.
The interface is somewhat lacking, however, especially since drilling down to the pre/post-requisite bugs requires multiple clicks. The same process can easily be achieved via an Excel macro. Below is a comparison between the Excel macro and the About Update Image utilities.
After upgrading to PeopleTools 8.55, our Application Engine processes started failing with a variety of cache error messages. It is a tricky scenario given our setup, which is as follows:
- The failing psae processes are triggered via the command line, using ksh wrapper scripts that issue psae commands
- The tools version was upgraded from 8.53 to 8.55
- The filesystem is shared between 2 Linux servers running multiple process scheduler instances
It took multiple cache parameter changes in the process scheduler before we identified the issue. It turns out that compatibility with the SAN filesystem is the culprit, and it has something to do with the filesystem locking mechanism for SAN v3. The error messages we encountered included:
- Meta-SQL error at or near position 177
- PSPAL: Abort: Unexpected signal received
- errorno = 2-No such file or directory
- errorno = 116-Stale file handle
- errorno = 10-No child processes
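In our case the workaround was to keep each scheduler's cache on storage local to the host rather than on the shared SAN filesystem. A minimal sketch of the relevant psprcs.cfg fragment, assuming /localcache is a filesystem local to each server (the path is a placeholder; verify the parameter spelling against your own psprcs.cfg):

```
[Cache Settings]
; Point the file cache at storage local to each scheduler host instead of
; the shared SAN filesystem (/localcache is a placeholder path)
CacheBaseDir=/localcache/PRCSDOM
```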
Active Data Guard (ADG) is one of the key features introduced in PeopleTools 8.51. The steps to configure ADG are discussed in the PeopleBooks Data Management section:
I recommended configuring ADG in order to make the most of the PeopleTools 8.53 features as part of an ongoing upgrade project. Besides the PeopleTools documentation, it is also one of the highlights in the release value proposition for PeopleTools 8.53, specifically the ability to offload processing to a standby database via ADG. Imagine my surprise when we raised an SR with Oracle to address an issue we were encountering when running PS/nVision on the standby database, and were advised that the functionality is not supported by ADG, contradicting the previously published documents.
Not entirely convinced by the information provided, I did some troubleshooting and managed to make PS/nVision run against the standby database. It turned out that additional steps are required beyond the ones outlined in PeopleBooks. At a high level, these are:
1. Create the DB link - script provided by Oracle: createdblinktoprimary.sql
2. Create local synonyms - script provided by Oracle: createlocalsynonyms.sql
3. Create remote synonyms - script provided by Oracle: createremotesynonyms.sql
The missing step for DML redirection requires a trigger that switches the primary schema user to the standby schema user each time it logs in to the standby database. The details can be found in
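For reference, this kind of logon trigger can be sketched as below. SYSADM (the primary schema owner) and PSSTDBY (the standby login) are assumed names for illustration only; the supported versions of these scripts come from Oracle.

```sql
-- Hypothetical sketch: when the standby user logs in, point its session at
-- the primary schema owner's objects (SYSADM and PSSTDBY are assumed names)
CREATE OR REPLACE TRIGGER psstdby_logon
AFTER LOGON ON PSSTDBY.SCHEMA
BEGIN
  EXECUTE IMMEDIATE 'ALTER SESSION SET CURRENT_SCHEMA = SYSADM';
END;
/
```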
To maintain environment configurations, we use the classic Oracle imp/exp process to extract data from and load it into the database. Data Pump, as Oracle presents it, is an improvement on classic imp/exp and is known to be faster in comparison. It seemed to be a good improvement to the existing process; however, after trying out the tool, I discovered that impdp/expdp will not actually allow you to perform the extract/load from a host outside the database server, something that can be done with imp/exp.
Overall, I'm disappointed and am now working on tuning the imp/exp process instead.
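The difference comes down to where each tool resolves its file path: classic exp writes wherever the client runs, while expdp resolves its dumpfile against a DIRECTORY object that lives on the database server. A sketch of the two invocations, with the user, TNS alias, and table purely as placeholders (the commands are only echoed here, not executed):

```shell
#!/bin/sh
# SYSADM/$PSPWD and FSDMO are a placeholder user and TNS alias, not real ones.
TNS=FSDMO

# Classic export: can run on any client host; file= is a client-side path.
exp_cmd="exp SYSADM/\$PSPWD@$TNS file=/tmp/psoprdefn.dmp tables=PSOPRDEFN"

# Data Pump: directory= names a DIRECTORY object on the database server, so
# the dump file can only land on the database host itself.
expdp_cmd="expdp SYSADM/\$PSPWD@$TNS directory=DATA_PUMP_DIR dumpfile=psoprdefn.dmp tables=PSOPRDEFN"

echo "$exp_cmd"
echo "$expdp_cmd"
```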
We recently had an incident involving an Integration Broker (IB) node in a production environment getting updated without any modification being performed either online or in the IBconfig.properties file. I'm not sure if other users have encountered a similar issue, but I would say it can occur if you are maintaining multiple PeopleSoft environments, and it has something to do with how PeopleSoft maintains the IB node configuration.
To recreate the issue/scenario encountered, you can perform the following steps:
1. You will need 2 PeopleSoft environments: one on which to perform the change (source) and another which you want to keep intact (target).
2. Log in to the source and navigate to the IB Gateway configuration page. Update or create a new Gateway using the target URL in the Gateway properties. Save the changes.
3. Again on the source, navigate to the IB node configuration and perform changes on the IB node. First, ensure that the IB node is using the Gateway that was updated or created in step 2. Then update the IB node configuration by adding an additional app server, etc. Save and ping the IB node.
4. Finally, log in to your target environment and verify whether there are any changes in the IB node settings.
While updating an IB node requires a password to be provided, it is important to note that the IB node is connected to the Gateway properties. Any update to an IB node that uses the gateway will be reflected on the target environment instead of the source.
This can be avoided by ensuring that the Gateway/IB node setup is not shared across multiple environments unless required as part of an integration. I can see a similar issue occurring especially around database refreshes/cloning.
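One quick sanity check after a refresh or clone is to inspect which app server URLs the gateway actually knows about, since the node connection details live in the gateway's integrationGateway.properties (the ig.isc.* entries) rather than in the node definition alone. A self-contained sketch, with the file path, node name, and hosts faked as placeholders:

```shell
#!/bin/sh
# The /tmp path and the node/host names are placeholders; the real file lives
# under the gateway web application's WEB-INF/classes directory.
PROPS=/tmp/integrationGateway.properties

# Fake a minimal properties file so the example is self-contained:
cat > "$PROPS" <<'EOF'
ig.isc.serverURL=//sourcehost:9000
ig.isc.NODE_TGT.serverURL=//targethost:9000
EOF

# List every node-to-app-server mapping the gateway holds:
grep 'serverURL' "$PROPS"
```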
From time to time we need to determine the versions of installed software when performing upgrades and patching, so I'm compiling this list of UNIX command lines for determining software versions, for easy reference.
1. Version of Tuxedo installed
export TUXDIR=<location of tuxedo installation>
$TUXDIR/bin/tmadmin -v
2. Version of Java
export JAVADIR=<location of java installation>
$JAVADIR/bin/java -version
3. Version of PeopleTools
grep ToolsRelease $PS_HOME/peopletools.properties
4. Version of Oracle client
$ORACLE_HOME/bin/sqlplus -V
5. WebLogic version
java -cp $WL_HOME/server/lib/weblogic.jar weblogic.version
Someone asked me the other day how to run a PeopleSoft SQR process via the UNIX command line. I would say it is pretty straightforward, as I've written many UNIX interface scripts to trigger SQR processes, although I must admit it is sometimes confusing, given that PeopleSoft allows execution of SQRs both via the sqr command and via the wrapper pssqr. Below is a comparison of how to use both commands to execute your SQRs:
pssqr Command Line Parameters
pssqr -CT <databasetype> -CD <databasename> -CA <AccessID> -CAP <AccessPwd> -RP <sqrprogramname> -R <runcontrolid> -I <process instance> -OP <outputdest> -CO <userid> -OT <outputtype> -OF <outputformat>
sqr Command Line Parameters
sqr <sqrdir>/<sqrfile> <dbuser>/<dbpassword>@<database> "-o<outputdir>/<outputfile>" -xb -xi -zif<sqrdir>/pssqr.unx <database> <process instance> <userid> <runcontrolid>
Are you all set once you pass the listed parameters? Not yet! There are environment variables which need to be set before you can actually run the binary. These are:
TUXCONFIG (for the sqr command line only)
Now you should be able to execute the SQR via the command line. It is important to keep in mind the number of characters allowed on the UNIX command line; the best practice is to use variables or a file that can be read by the command. You can use the example below:
echo "-CT ORACLE -CD DEMO -CA PS -CAP PS -RP PSSQR -R PSSQRRNCTL -I 1 -OP /tmp/PSSQR.lis -CO VP1 -OT 6 -OF 2 -LG ENG " > /tmp/mysqr.cmd
$PROG $DB "-o/tmp/PSSQR.out" -xb -xi "-i/PSSQR.lis/,$PS_SERVER_CFG/sqr/," -zif$PS_SERVER_CFG/sqr/pssqr.unx "-f/tmp/PSSQRRNCTL.lis" DEMO 1 VP1 PSSQRRNCTL
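As a self-contained illustration of the command-file technique (the parameter values are the sample ones above, and echo stands in for the real pssqr binary):

```shell
#!/bin/sh
# Write the long parameter string once to a command file...
CMDFILE=/tmp/mysqr.cmd
echo "-CT ORACLE -CD DEMO -CA PS -CAP PS -RP PSSQR -R PSSQRRNCTL -I 1" > "$CMDFILE"

# ...then read it back into a variable so the visible command line stays short.
ARGS=$(cat "$CMDFILE")
echo pssqr $ARGS    # 'echo' is used here in place of the real binary
```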
1. Ensure that the security-related tables contain the relevant data. During setup after upgrading the environment, it was identified that object permissions were empty in the permission list. This can be resolved by copying the following tables from an existing environment or the demo database:
2. Create a new permission list. The fastest way to do this is to clone an existing permission list, e.g. ALLPAGES.
3. Open the newly cloned permission list, click the PeopleTools tab, and ensure that Application Designer Access is ticked.
4. Under Application Designer, select Definition Permissions and click the Read Only (All) button.
5. Under Application Designer, select Tools Permissions and click the Read Only (All) button.
6. Save the updated permission list.
7. Create a new role; as with the permission list, clone an existing role.
8. Open the newly created role, click the Permission Lists tab, and add the permission list created in step 6.
9. Save the newly created role, which can then be granted to users who require read-only access to Application Designer.
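Once saved, the role-to-permission-list link can also be sanity-checked straight from the database via PSROLECLASS, which holds that mapping (RO_APPDSN is an assumed name for the cloned role):

```sql
-- RO_APPDSN is a hypothetical role name used for illustration
SELECT CLASSID
  FROM PSROLECLASS
 WHERE ROLENAME = 'RO_APPDSN';
```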
1. Unzip the .ova file to reveal the contained .ovf and .vmdk files
2. Download the OVFTool to convert the .ovf to .vmx
ovftool -tt=vmx --lax <ova file> <vmx file>
3. OVFTool complained about unsupported devices (like the sound card, etc.), so we used the <Item ovf:required="false"> tag for these devices.
4. On trying to upload the .vmx file using the VMware standalone converter tool, the .vmx was missing some critical references to the SCSI device and the named .vmdk file. We looked at a working .vmx file from a different server and substituted a few lines where needed to get the image to load. The final edited .vmx file for our setup is listed below.
annotation = "Madrona 4.0 Ubuntu 12.04 Server VM"
config.version = "8"
displayname = "Madrona 4.0 Ubuntu 12.04 Server VM"
ethernet0.addressType = "generated"
ethernet0.connectionType = "bridged"
ethernet0.present = "true"
ethernet0.startConnected = "true"
ethernet0.virtualDev = "e1000"
floppy0.present = "false"
guestos = "ubuntu"
ide1:0.autodetect = "true"
ide1:0.deviceType = "atapi-cdrom"
ide1:0.fileName = "auto detect"
ide1:0.present = "true"
ide1:0.startConnected = "true"
scsi0.present = "TRUE"
scsi0.virtualDev = "lsilogic"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "madrona_virtual_v3-disk1.vmdk"
lsilogic.noDriver = "FALSE"
memsize = "512"
mks.enable3d = "TRUE"
numvcpus = "1"
pciBridge0.present = "TRUE"
pciBridge4.functions = "8"
pciBridge4.present = "TRUE"
pciBridge4.virtualDev = "pcieRootPort"
pciBridge5.functions = "8"
pciBridge5.present = "TRUE"
pciBridge5.virtualDev = "pcieRootPort"
pciBridge6.functions = "8"
pciBridge6.present = "TRUE"
pciBridge6.virtualDev = "pcieRootPort"
pciBridge7.functions = "8"
pciBridge7.present = "TRUE"
pciBridge7.virtualDev = "pcieRootPort"
svga.autodetect = "true"
usb.present = "true"
virtualhw.version = "7"
vmci0.present = "TRUE"
For the times when you need to log in to your WebLogic console but have forgotten the passwords, below are the steps to perform a password reset before resorting to a full reinstall.
1. Backup the security directory
tar -cvf security.tar security
2. Delete the authenticator file
rm -rf DefaultAuthenticatorInit.ldift
3. Set environment variables (e.g. source the domain's setEnv.sh so that weblogic.jar is on the classpath)
4. Reset the password
java weblogic.security.utils.AdminAccount operator password //webserv//security
5. Update the boot.properties file with the new password
6. Cleanup ldap directory
rm -rf //webserv//servers/PIA/data/ldap/*
7. Bounce your weblogic instance.