Loading parquet files from S3 with an IAM role instead of user credentials

Hello,
We're trying to set up a TigerGraph instance on an Amazon EC2 instance to load parquet files from S3 buckets.
We have no IAM users for accessing S3 resources (privacy policy), so we have no permanent ACCESS_ID and SECRET_KEY credentials. We managed to get temporary credentials from the IAM role, but those also include a SESSION_TOKEN. The EC2 instance itself has all the necessary permissions to access S3 at the role level.
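For context, the temporary role credentials come from the EC2 instance-metadata endpoint and include three parts, not two. A small Python sketch (placeholder values, not real credentials) of how we turn such a response into a data-source config string, assuming the Hadoop-style `fs.s3a.session.token` key would be honoured:

```python
import json

# Shape of the JSON returned by the EC2 instance-metadata endpoint
# http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>
# (placeholder values; in practice this is fetched with curl or an HTTP
# client from inside the instance).
imds_response = """{
    "AccessKeyId": "ASIATIHZYEXAMPLEXWN",
    "SecretAccessKey": "HEkdXcW8ZtExampleSecret",
    "Token": "IQoJb3JpZ2luX2VjEXAMPLESESSIONTOKEN",
    "Expiration": "2021-07-09T14:19:00Z"
}"""

creds = json.loads(imds_response)

# Build the data-source config string in the same s3a key style as the
# `set s1 = ...` command below. "fs.s3a.session.token" is the standard
# Hadoop S3A property for a session token; whether TigerGraph's S3 loader
# accepts it is exactly our question.
config = json.dumps({
    "file.reader.settings.fs.s3a.access.key": creds["AccessKeyId"],
    "file.reader.settings.fs.s3a.secret.key": creds["SecretAccessKey"],
    "file.reader.settings.fs.s3a.session.token": creds["Token"],
})
print(config)
```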
If we run the loading job with a data_source that has no credentials, the job fails with an Internal error (NullPointerExceptions in the logs):

I@20210709 08:19:00.425 tigergraph|127.0.0.1:37656|00000000009 (LoadingJobRunTimeConfig.java:108) The created s3 job ID: cookie_data_store.parquet_load.s3.s1.1625818740425
I@20210709 08:19:00.425 tigergraph|127.0.0.1:37656|00000000009 (S3LoadingJob.java:74) Sending s3 loading job
E@20210709 08:19:00.426 tigergraph|127.0.0.1:37656|00000000009 (QueryBlockHandler.java:202) java.lang.NullPointerException
java.lang.NullPointerException
        at java.io.StringReader.<init>(StringReader.java:50)
        at org.json.JSONTokener.<init>(JSONTokener.java:94)
        at org.json.JSONObject.<init>(JSONObject.java:406)
        at com.tigergraph.schema.plan.job.S3LoadingJob.sendS3LoadingJob(S3LoadingJob.java:118)
        at com.tigergraph.schema.plan.job.S3LoadingJob.runConcurrentLoadingJob(S3LoadingJob.java:206)
        at com.tigergraph.schema.plan.job.BaseLoadingJob.runLoadingJobs(BaseLoadingJob.java:437)
        at com.tigergraph.schema.plan.job.BaseLoadingJob.runLoadingJobs(BaseLoadingJob.java:385)
        at com.tigergraph.schema.handler.QueryBlockHandler.a(QueryBlockHandler.java:482)
        at com.tigergraph.schema.handler.QueryBlockHandler.a(QueryBlockHandler.java:195)
        at com.tigergraph.schema.handler.CommandHandler.a(CommandHandler.java:90)
        at com.tigergraph.schema.handler.BaseHandler.handle(BaseHandler.java:229)
        at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:79)
        at sun.net.httpserver.AuthFilter.doFilter(AuthFilter.java:72)
        at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:82)
        at sun.net.httpserver.ServerImpl$Exchange$LinkHandler.handle(ServerImpl.java:675)
        at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:79)
        at sun.net.httpserver.ServerImpl$Exchange.run(ServerImpl.java:647)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
D@20210709 08:19:00.429 tigergraph|127.0.0.1:37656|00000000009 (Util.java:1592) __GSQL__COOKIES__,{"sessionId":"00000000009","serverId":"1_1625818022132","graph":"cookie_data_store","terminalWidth":180,"compileThread":0,"clientPath":"/home/tigergraph","fromGraphStudio":false,"fromGsqlClient":true,"fromGsqlLeaderServer":false,"clientCommit":"3887cbd1d67b58ba6f88c50a069b679e20743984","sessionParameters":{},"sessionAborted":false,"loadingProgressAborted":false}
E@20210709 08:19:00.429 tigergraph|127.0.0.1:37656|00000000009 (BaseHandler.java:234) java.lang.IllegalMonitorStateException
java.lang.IllegalMonitorStateException
        at java.util.concurrent.locks.ReentrantLock$Sync.tryRelease(ReentrantLock.java:151)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1261)
        at java.util.concurrent.locks.ReentrantLock.unlock(ReentrantLock.java:457)
        at com.tigergraph.schema.handler.CommandHandler.a(CommandHandler.java:98)
        at com.tigergraph.schema.handler.BaseHandler.handle(BaseHandler.java:229)
        at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:79)
        at sun.net.httpserver.AuthFilter.doFilter(AuthFilter.java:72)
        at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:82)
        at sun.net.httpserver.ServerImpl$Exchange$LinkHandler.handle(ServerImpl.java:675)
        at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:79)
        at sun.net.httpserver.ServerImpl$Exchange.run(ServerImpl.java:647)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
D@20210709 08:19:00.431 tigergraph|127.0.0.1:37656|00000000009 (Util.java:1592) Internal Error, please contact support@tigergraph.com

Setting just the access key and secret key (without the session token) causes the connection to fail:

set s1 = "{\"file.reader.settings.fs.s3a.access.key\":\"ASIATIHZY******XWN\",\"file.reader.settings.fs.s3a.secret.key\":\"HEkdXcW8ZtI******5B8zhkin\"}"
Can't connect to S3 using provided credential.
Failed to update data source 's1'.

As far as I know, authentication/authorization via IAM role is not yet supported. You need to use an access key/secret key combo. With that in place, loading from Parquet files works well, as expected.
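For anyone landing here later, a minimal sketch of the access-key/secret-key setup that does work. The GSQL syntax may vary slightly by version, and the bucket path and keys are placeholders; the graph, data-source, and job names are taken from the log above:

```gsql
CREATE DATA_SOURCE S3 s1 FOR GRAPH cookie_data_store
SET s1 = "{\"file.reader.settings.fs.s3a.access.key\":\"<ACCESS_KEY>\",\"file.reader.settings.fs.s3a.secret.key\":\"<SECRET_KEY>\"}"

RUN LOADING JOB parquet_load USING f1="$s1:{\"file.uris\":\"s3://<bucket>/<path>\",\"file.reader.type\":\"parquet\"}"
```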