feat[file]: add s3 functionality #659
Conversation
fix issue with archiving -> disable archiving for s3
Thank you @m-mattia-m :) I’ll try to review the PR as soon as I can. Since I’ll be on vacation next week and have a lot going on, it may take me some time to review and test it fully.
Sorry for my late response, I have a lot going on.
I noticed an issue with how chunked file uploads are being handled. Currently, only the first chunk of a file is successfully uploaded to S3. Pingvin Share relies on chunked uploads, where the frontend splits a file into multiple parts and uploads each part sequentially.
With your current implementation, only the initial chunk is uploaded. You can verify this by trying to upload a file larger than the chunk size (default is 10 MB).
Additionally, I would suggest creating a separate S3 file service, as the methods in the file service get a bit too complicated when both S3 and local file storage are handled.
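The chunking problem described above can be illustrated with a small sketch: the chunks have to be accumulated until the last one arrives, rather than each chunk being treated as a complete file. This is not the PR's actual code; `ChunkAssembler` and its method names are hypothetical, and a production version would use S3's multipart upload API instead of buffering the whole file in memory.

```typescript
// Hypothetical sketch: collect the sequential chunks sent by the frontend and
// only hand the complete buffer to a single S3 upload once the final chunk
// arrives. (A real implementation would stream via S3 multipart upload rather
// than buffering everything in memory.)
class ChunkAssembler {
  private chunks: Buffer[] = [];

  // chunkIndex is 0-based; totalChunks signals when the file is complete.
  // Returns the full file once all chunks are in, otherwise null.
  addChunk(chunkIndex: number, totalChunks: number, data: Buffer): Buffer | null {
    if (chunkIndex !== this.chunks.length) {
      throw new Error(`expected chunk ${this.chunks.length}, got ${chunkIndex}`);
    }
    this.chunks.push(data);
    return this.chunks.length === totalChunks ? Buffer.concat(this.chunks) : null;
  }
}
```

If instead each incoming chunk triggers its own object upload to the same key, only a single chunk ends up in the bucket, which matches the symptom of files larger than the chunk size failing.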
No worries :) I can see what you mean by "a bit too complicated" :D. I will create a separate service for the S3 functionality. Yes, you are right, large files fail during upload. I never tested with large files, which I should have done. I will implement this in a few weeks, but I also have a lot going on :).
@m-mattia-m Sure, take your time :)
# Conflicts:
#   backend/src/file/file.service.ts
#   frontend/src/i18n/translations/de-DE.ts
@stonith404 FYI: I fixed the chunking problem and split the file service into a local and an S3 service. Note that I still need to refactor these changes, but I will do that in a few days.
@stonith404 I updated the code a bit, and I hope it's fine. Don't hesitate to tell me what I should change or update. :)
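The local/S3 service split mentioned above could look roughly like this. This is an illustrative sketch, not the PR's actual classes: in the real NestJS backend these would be `@Injectable()` services backed by the filesystem and the AWS S3 SDK, while here the local variant is in-memory just so the sketch runs.

```typescript
// Hypothetical sketch: one shared interface, one implementation per backend.
interface FileStorageService {
  create(shareId: string, fileId: string, data: Buffer): void;
  get(shareId: string, fileId: string): Buffer;
  deleteAllFiles(shareId: string): void;
}

class LocalFileService implements FileStorageService {
  private files = new Map<string, Buffer>();

  create(shareId: string, fileId: string, data: Buffer): void {
    this.files.set(`${shareId}/${fileId}`, data); // real impl: fs.writeFile
  }

  get(shareId: string, fileId: string): Buffer {
    const file = this.files.get(`${shareId}/${fileId}`); // real impl: fs.readFile
    if (!file) throw new Error("file not found");
    return file;
  }

  deleteAllFiles(shareId: string): void {
    for (const key of this.files.keys()) {
      if (key.startsWith(`${shareId}/`)) this.files.delete(key); // real impl: fs.rm
    }
  }
}

// An S3FileService would implement the same interface, calling PutObject,
// GetObject and DeleteObjects on the bucket instead of touching the filesystem.
```

Keeping both backends behind one interface keeps the controller code free of `if (s3) ... else ...` branches.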
Thanks for taking the time to implement my feedback :)
Most things are now working as expected, but I’ve noticed a few remaining issues. Please take a look at my additional comments.
Review comment on frontend/src/components/admin/configuration/ConfigurationNavBar.tsx (outdated, resolved)
Co-authored-by: Elias Schneider <[email protected]>
…r.tsx Co-authored-by: Elias Schneider <[email protected]>
Co-authored-by: Elias Schneider <[email protected]>
Co-authored-by: Elias Schneider <[email protected]>
I noticed two things that I will change/implement in a few weeks when I have more time.
Thanks for the changes! Sure, take your time. I think it would be sufficient to add the provider information to the share instead of adding it to every file. When these two issues are fixed, it should finally be ready to merge :)
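The suggestion above, storing the provider once per share instead of per file, can be sketched as a simple lookup. The field and type names here are illustrative, not the actual Prisma schema or service names from the PR.

```typescript
// Hypothetical sketch: the storage provider is recorded once on the share,
// and file operations pick the backend from the share record.
type StorageProvider = "LOCAL" | "S3";

interface Share {
  id: string;
  storageProvider: StorageProvider;
}

interface FileStorage {
  deleteAllFiles(shareId: string): void;
}

// Choose the backend for a share based on its stored provider.
function storageFor(share: Share, local: FileStorage, s3: FileStorage): FileStorage {
  return share.storageProvider === "S3" ? s3 : local;
}
```

Since all files of a share live in the same backend, a per-file provider column would only duplicate this information.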
I fixed the two issues I mentioned before. Now everything should work :).
I made some minor changes, like formatting the code, adding the Prisma migration, and removing storageProvider from the DTO, as the client doesn't need this property.
For me it looks good now. Should I merge it?
Thank you and that would be great. :)
Great, thanks again for your contribution!
hi @m-mattia-m I just filed a bug report: I can use S3 buckets and upload, but it's impossible to delete the files from the shares.
For anyone's information: Bug Report: S3 shares are not deleted on S3 compatible buckets #705
PS: Please bear in mind that I've never used NestJS before, so I'm not sure if everything is best practice, but it works. Don't hesitate to tell me what I should change or update.