feat[file]: add s3 functionality #659

Merged: 30 commits into stonith404:main on Dec 19, 2024

Conversation

m-mattia-m
Contributor

  • Configure S3 endpoint, bucket, base path, credentials
  • Upload file to S3
  • Get file from S3
  • Delete file from S3 (see the sketch below)
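
For context, this surface maps onto a handful of AWS SDK calls. Below is a minimal sketch assuming the AWS SDK v3 client (`@aws-sdk/client-s3`); the bucket, base path, and environment variable names are illustrative, not taken from this PR:

```ts
// Minimal sketch of the S3 surface, assuming the AWS SDK v3 client
// (@aws-sdk/client-s3). Bucket, base path, and env var names are
// illustrative, not taken from this PR.
import {
  S3Client,
  PutObjectCommand,
  GetObjectCommand,
  DeleteObjectCommand,
} from "@aws-sdk/client-s3";

const client = new S3Client({
  region: "us-east-1",
  endpoint: "https://s3.example.com", // custom S3-compatible endpoint
  forcePathStyle: true, // commonly needed for MinIO-style endpoints
  credentials: {
    accessKeyId: process.env.S3_ACCESS_KEY_ID ?? "",
    secretAccessKey: process.env.S3_SECRET_ACCESS_KEY ?? "",
  },
});

const BUCKET = "my-bucket";
const BASE_PATH = "uploads"; // prefix applied to every object key

export async function uploadFile(key: string, body: Buffer) {
  await client.send(
    new PutObjectCommand({ Bucket: BUCKET, Key: `${BASE_PATH}/${key}`, Body: body }),
  );
}

export async function getFile(key: string) {
  const res = await client.send(
    new GetObjectCommand({ Bucket: BUCKET, Key: `${BASE_PATH}/${key}` }),
  );
  return res.Body; // a readable stream in Node.js
}

export async function deleteFile(key: string) {
  await client.send(
    new DeleteObjectCommand({ Bucket: BUCKET, Key: `${BASE_PATH}/${key}` }),
  );
}
```

Because the endpoint and `forcePathStyle` are configurable, the same calls work against S3-compatible stores such as MinIO.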

PS: Please bear in mind that I've never used NestJS before, so I'm not sure if everything is best practice, but it works. Don't hesitate to tell me what I should change or update.

@stonith404
Owner

Thank you @m-mattia-m :) I’ll try to review the PR as soon as I can. Since I’ll be on vacation next week and have a lot going on, it may take me some time to review and test it fully.

@stonith404
Owner

Sorry for my late response, I have a lot going on.

I noticed an issue with how chunked file uploads are being handled. Currently, only the first chunk of a file is successfully uploaded to S3. Pingvin Share relies on chunked uploads, where the frontend splits a file into multiple parts and uploads each part sequentially.

With your current implementation, only the initial chunk is uploaded. You can verify this by trying to upload a file larger than the chunk size (default is 10 MB).
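
For context, the usual way to get sequential chunks into a single S3 object is the multipart-upload API: open an upload once, send each chunk as one part, and complete the upload at the end. A minimal sketch with `@aws-sdk/client-s3` (illustrative, not the PR's actual code):

```ts
// Sketch of handling sequentially arriving chunks via S3 multipart
// upload, assuming @aws-sdk/client-s3; illustrative, not the PR's code.
import {
  S3Client,
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand,
} from "@aws-sdk/client-s3";

const client = new S3Client({});
const Bucket = "my-bucket"; // hypothetical bucket name

export async function uploadInChunks(Key: string, chunks: Buffer[]) {
  // 1. Open the multipart upload once and keep its UploadId.
  const { UploadId } = await client.send(
    new CreateMultipartUploadCommand({ Bucket, Key }),
  );

  // 2. Upload every chunk as one part (parts are 1-indexed). Note that
  //    S3 requires all parts except the last to be at least 5 MB, which
  //    the 10 MB default chunk size satisfies.
  const parts: { ETag?: string; PartNumber: number }[] = [];
  for (let i = 0; i < chunks.length; i++) {
    const { ETag } = await client.send(
      new UploadPartCommand({
        Bucket,
        Key,
        UploadId,
        PartNumber: i + 1,
        Body: chunks[i],
      }),
    );
    parts.push({ ETag, PartNumber: i + 1 });
  }

  // 3. Complete the upload so S3 assembles the parts into one object.
  await client.send(
    new CompleteMultipartUploadCommand({
      Bucket,
      Key,
      UploadId,
      MultipartUpload: { Parts: parts },
    }),
  );
}
```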

@stonith404
Owner

Additionally, I would suggest creating a separate S3 file service, as the methods in the file service become a bit too complicated when both S3 and local file storage are handled in one place.
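
One common way to do that split is a shared storage interface with one implementation per provider, so the rest of the file service never branches on the backend. A rough sketch, with every name illustrative rather than taken from the PR:

```ts
// Rough sketch of a shared storage interface with one implementation
// per provider; every name here is illustrative, not the PR's code.
import { createReadStream, promises as fs } from "fs";
import { join } from "path";

interface StorageService {
  saveChunk(fileId: string, index: number, chunk: Buffer): Promise<void>;
  read(fileId: string): Promise<NodeJS.ReadableStream>;
  remove(fileId: string): Promise<void>;
}

// Local backend: appends sequential chunks to a file on disk.
class LocalStorageService implements StorageService {
  constructor(private readonly dir: string) {}

  async saveChunk(fileId: string, _index: number, chunk: Buffer) {
    await fs.appendFile(join(this.dir, fileId), chunk);
  }

  async read(fileId: string) {
    return createReadStream(join(this.dir, fileId));
  }

  async remove(fileId: string) {
    await fs.unlink(join(this.dir, fileId));
  }
}

// An S3StorageService would implement the same three methods with the
// multipart-upload calls sketched above; the rest of the file service
// only ever talks to the StorageService interface.
```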

@m-mattia-m
Contributor Author

No worries :)

I can see what you mean by "a bit too complicated" :D. I will create a separate service for the S3 functionality. Yes, you're right, large files fail during the upload; I never tested with large files, which I should have done. I will implement this in a few weeks, but I also have a lot going on :).

@stonith404
Owner

@m-mattia-m Sure, take your time :)

@m-mattia-m
Contributor Author

m-mattia-m commented Nov 29, 2024

@stonith404 FYI: I fixed the chunking problem and split the file service into a local and an S3 service. Note that I still need to refactor these changes, but I will do that in a few days.
If you have time, I'd appreciate it if you could test the changes and see whether you find any issues. :)

@m-mattia-m
Contributor Author

@stonith404 I updated the code a bit, and I hope it's fine. Don't hesitate to tell me what I should change or update. :)

@stonith404
Owner

Thanks for taking the time to implement my feedback :)

Most things are now working as expected, but I’ve noticed a few remaining issues. Please take a look at my additional comments.

@m-mattia-m
Contributor Author

m-mattia-m commented Dec 8, 2024

I noticed two things that I will change/implement in a few weeks when I have more time.

  • When you try to open the preview for a PDF larger than the chunk size (10 MB by default), an error occurs: the file is downloaded, without the correct name and file extension, instead of being displayed in the PDF viewer (see the header sketch after this list).
  • If you change the storage provider at runtime, the backend looks for files uploaded before the change in the new location, where it can't find them, so I will add a location attribute (s3 or local) to each file in the database.
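
On the first point: when streaming the object back for a preview, the response needs an explicit `Content-Type` and an `inline` `Content-Disposition`, otherwise the browser falls back to a nameless download. A minimal sketch assuming `@aws-sdk/client-s3` and an Express-style response (NestJS's default); names are illustrative:

```ts
// Sketch of streaming an S3 object back with headers that allow an
// inline preview, assuming @aws-sdk/client-s3 and an Express-style
// response (NestJS's default). Names are illustrative.
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import type { Response } from "express";
import type { Readable } from "stream";

const client = new S3Client({});

export async function streamForPreview(
  res: Response,
  bucket: string,
  key: string,
  fileName: string,
) {
  const obj = await client.send(
    new GetObjectCommand({ Bucket: bucket, Key: key }),
  );

  // Without an explicit type and an `inline` disposition the browser
  // treats the response as a bare download with no usable file name.
  res.setHeader("Content-Type", obj.ContentType ?? "application/octet-stream");
  res.setHeader("Content-Disposition", `inline; filename="${fileName}"`);

  (obj.Body as Readable).pipe(res);
}
```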

@stonith404
Owner

Thanks for the changes!

Sure, take your time. I think it would be sufficient to add the provider information to the share, instead of adding it to every file.

When these two issues are fixed it should finally be ready to merge :)
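
For illustration, recording the provider on the share and dispatching on it at read time could look roughly like this; all names are assumptions, not the PR's actual schema or code:

```ts
// Illustrative sketch of recording the provider on each share and
// dispatching on it at read time; names are assumptions, not the
// PR's actual schema or code.
type StorageProvider = "LOCAL" | "S3";

interface ShareRecord {
  id: string;
  storageProvider: StorageProvider; // persisted when the share is created
}

interface StorageService {
  read(fileId: string): Promise<NodeJS.ReadableStream>;
}

function storageFor(
  share: ShareRecord,
  local: StorageService,
  s3: StorageService,
): StorageService {
  // Dispatch on the provider stored with the share, not on the
  // currently configured one, so shares created before a provider
  // switch still resolve to where their files actually live.
  return share.storageProvider === "S3" ? s3 : local;
}
```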

@m-mattia-m
Contributor Author

I fixed the two issues I mentioned before. Now everything should work :).

@stonith404
Owner

I made some minor changes like formatting the code, adding the Prisma migration, and removing storageProvider from the DTO, as the client doesn't need this property.

For me it looks good now. Should I merge it?

@m-mattia-m
Contributor Author

m-mattia-m commented Dec 18, 2024

Thank you and that would be great. :)

@stonith404
Owner

Great, thanks again for your contribution!

@stonith404 merged commit 5a54fe4 into stonith404:main on Dec 19, 2024
@GitNees
Contributor

GitNees commented Dec 24, 2024

Hi @m-mattia-m, I just filed a bug report: I can use S3 buckets and upload files, but it's impossible to delete the files from shares.

@m-mattia-m
Contributor Author

m-mattia-m commented Dec 24, 2024

@m-mattia-m deleted the feat-s3 branch on Dec 24, 2024 at 09:15