Oracle RDS has limitations in how it supports file-based workflows:
- External tables are apparently not supported
- It is, to my knowledge, not possible to mount a disk and use UTL_FILE to read from or write to it
There are documents describing procedures for Data Pump exports/imports and migration, but transferring files to and from the RDS instance seems very cumbersome. Our process is best described as semi-interactive: users are used to mounting the data directory over Samba or NFS and seeing the results without significant delay.
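For reference, the closest thing I've found to inspecting files on the instance is querying the RDS-provided rdsadmin.rds_file_util package (assuming I'm reading the AWS docs correctly), e.g.:

```sql
-- List what is currently sitting in DATA_PUMP_DIR on the RDS instance
SELECT filename, filesize, mtime
  FROM TABLE(rdsadmin.rds_file_util.listdir(p_directory => 'DATA_PUMP_DIR'))
 ORDER BY mtime DESC;
```

which works, but is a far cry from browsing a mounted share.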
The question is: what would be the best way to achieve this on AWS? We're planning some light PoCs with the following:
- Writing some PL/SQL to sync the contents of e.g. DATA_PUMP_DIR to an EC2-mounted volume (a rough sketch of this follows below)
- Using AWS Data Pipeline to read and write files to S3, as described here
- Using a smaller Oracle installation on an EC2 instance to run file operations over DB links (also sketched below)
- Writing custom programs/scripts that run the operations over JDBC or similar, executed on Lambda or on some fixed instance
- ???
Has anyone else dealt with this and built a similar workflow? What is the best way to achieve this, or should we take a completely different approach?