Compare commits


264 commits

Author SHA1 Message Date
Houkime faa4402030 chore(block devices): edit comment to be more correct 2024-09-13 12:31:30 +00:00
Inex Code 6340ad348c chore: Recover fixes destroyed by force push
Please don't do this again
2024-09-13 12:11:56 +00:00
Inex Code 63bcfa3077 chore: string casing 2024-09-13 12:11:56 +00:00
Inex Code d3e7eb44ea chore: Linting 2024-09-13 12:11:56 +00:00
Houkime 6eca44526a chore(services): clean up the config service 2024-09-13 12:11:56 +00:00
Houkime 408284a69f chore(backup): make a comment into a docstring 2024-09-13 12:11:56 +00:00
Houkime 5ea000baab feature(backups): manual autobackup -> total backup 2024-09-13 12:11:56 +00:00
Houkime ee06d68047 feature(backups): allow non-autobackup slices for full restoration 2024-09-13 12:11:56 +00:00
Houkime 1a9a381753 refactor(backups): handle the case when there is no snapshot to sync date with 2024-09-13 12:11:56 +00:00
Houkime 53c6bc1af7 refactor(backups): cleanup old config service code 2024-09-13 12:11:56 +00:00
Houkime 0d23b91a37 refactor(backups): config service reformat 2024-09-13 12:11:56 +00:00
Houkime 27f09d04de fix(backups): change the dump folder 2024-09-13 12:11:56 +00:00
Houkime b522c72aaf test(jobs): clean jobs properly 2024-09-13 12:11:56 +00:00
Houkime b67777835d fix(backup): make last slice return a correct list 2024-09-13 12:11:56 +00:00
Houkime a5b52c8f75 feature(backup): endpoint to force autobackup 2024-09-13 12:11:56 +00:00
Houkime bb493e6b74 feature(backup): reload snapshots when migrating 2024-09-13 12:11:56 +00:00
Houkime a4a70c07d3 test(backup): migration test 2024-09-13 12:11:56 +00:00
Houkime 427fdbdb49 test(backup): minimal snapshot slice test 2024-09-13 12:11:56 +00:00
Houkime bfb0442e94 feature(backup): query to see restored snapshots in advance 2024-09-13 12:11:56 +00:00
Houkime 5e07a9eaeb feature(backup): error handling for the full restore endpoint 2024-09-13 12:11:56 +00:00
Houkime 7de5d26a81 feature(backup): full restore task 2024-09-13 12:11:56 +00:00
Houkime be4e883b12 feature(backup): autobackup slice detection 2024-09-13 12:11:56 +00:00
Houkime 7ae550fd26 refactor(system): break out rebuild job creation 2024-09-13 12:11:56 +00:00
Houkime f068329153 fix(service manager): debug and test backup hooks 2024-09-13 12:11:56 +00:00
Houkime f8c6a8b9d6 refactor(utils): maybe make fsavail an int? 2024-09-13 12:11:56 +00:00
Houkime af014e8b83 feature(backup): support for perma-active services and services with no existing data 2024-09-13 12:11:56 +00:00
Houkime 0329addd1f feature(services): add perma-active services (api itself) 2024-09-13 12:11:56 +00:00
Houkime 35e2e8cc78 test(dkim): separate dummy dkim into a folder 2024-09-13 12:11:56 +00:00
Houkime c5c6d860fd test(secrets): add a dummy secrets file 2024-09-13 12:11:56 +00:00
Houkime d4998ded46 refactor(services): migrate service management to a special service 2024-09-13 12:11:56 +00:00
Houkime 2ef674a037 refactor(services): PARTIAL migrate get_all_services 2024-09-13 12:11:56 +00:00
Houkime f6151ee451 feature(backup): add migration specific endpoints 2024-09-13 12:11:56 +00:00
Houkime 8c44f78bbb feature(services): add config service 2024-09-13 12:11:56 +00:00
Houkime f57eda5237 feature(services): allow moving uninitialized services 2024-09-13 12:11:56 +00:00
dettlaff 6afaefbb41 tests: fix nix_collect_garbage 2024-09-12 16:09:30 +04:00
Inex Code e6b7a1c168 style: linting 2024-09-11 13:58:48 +03:00
Houkime 68d0ee8c5d test(system): dns migration 2024-09-11 13:58:48 +03:00
Houkime 77fb99d84e feature(system): dns migration 2024-09-11 13:58:48 +03:00
dettlaff ac07090784 style: blacked 2024-09-05 15:57:27 +04:00
def 81d082ff2a fix: nix collect garbage 2024-09-05 14:54:58 +03:00
Houkime 8ef63eb90e fix(backups): cover the case when service fails to stop 2024-08-16 15:36:22 +03:00
dettlaff 391e4802b2 tests: add tests for monitoring (#140)
Co-authored-by: nhnn <nhnn@disroot.org>
Co-authored-by: Inex Code <inex.code@selfprivacy.org>
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/140
Reviewed-by: Inex Code <inex.code@selfprivacy.org>
Co-authored-by: dettlaff <dettlaff@riseup.net>
Co-committed-by: dettlaff <dettlaff@riseup.net>
2024-08-16 15:36:07 +03:00
Houkime 55bbb0f3cc test(services): add more debug to the dummy service 2024-08-16 14:14:56 +03:00
Inex Code 1d31a29dce chore: Add bandit to dev shell 2024-08-12 21:53:44 +03:00
dettlaff bbd909a544 feat: timeout for monitoring 2024-08-12 21:45:21 +03:00
Houkime 3c3b0f6be0 fix(backups): allow retrying when deleting service files 2024-08-12 19:45:51 +03:00
nhnn 1bfe7cf8dc fix: stop prosody when jitsi stops 2024-08-09 11:17:27 +03:00
dettlaff 4cd90d0c93 feat: add Prometheus monitoring (#120)
Co-authored-by: nhnn <nhnn@disroot.org>
Co-authored-by: Inex Code <inex.code@selfprivacy.org>
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/120
Co-authored-by: dettlaff <dettlaff@riseup.net>
Co-committed-by: dettlaff <dettlaff@riseup.net>
2024-07-30 16:55:57 +03:00
Inex Code 1259c081ef style: Reformat with new Black version 2024-07-26 22:59:44 +03:00
Inex Code 659cfca8a3 chore: Migrate to NixOS 24.05 2024-07-26 22:59:32 +03:00
Inex Code 9b93107b36 feat: Service configuration (#127)
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/127
2024-07-26 18:33:04 +03:00
Inex Code 40b8eb06d0 Merge pull request 'feat: add option to filter logs by unit or slice' (#128) from nhnn/selfprivacy-rest-api:logs-filtering into master
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/128
Reviewed-by: Inex Code <inex.code@selfprivacy.org>
2024-07-26 16:33:05 +03:00
nhnn 3c024cb613 feat: add option to filter logs by unit or slice 2024-07-25 20:34:28 +03:00
Alexander Tomokhov a00aae1bee fix: remove '-v' in pytest-vm 2024-07-15 17:00:26 +03:00
Inex Code b510af725b Merge pull request 'feat: add roundcube service' (#119) from def/selfprivacy-rest-api:master into master
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/119
Reviewed-by: Inex Code <inex.code@selfprivacy.org>
2024-07-15 16:45:46 +03:00
Inex Code d18d644cec Merge remote-tracking branch 'origin/master' into roundcube 2024-07-15 17:30:59 +04:00
Inex Code 16d1f9f21a Merge pull request 'feat: graphql endpoint to fetch system logs' (#116) from nhnn/selfprivacy-rest-api:api-logs into master
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/116
Reviewed-by: Inex Code <inex.code@selfprivacy.org>
2024-07-15 16:23:30 +03:00
Inex Code d8fe54e0e9 fix: do not use bare 'except' 2024-07-15 17:05:38 +04:00
Inex Code 5c5e098bab style: do not break line before logic operator 2024-07-15 17:02:34 +04:00
Inex Code cc4b411657 refactor: Replace strawberry.types.Info with just Info 2024-07-15 16:59:27 +04:00
nhnn 94b0276f74 fix: extract business logic to utils/systemd_journal.py 2024-07-13 11:58:54 +03:00
Inex Code c857678c9a docs: Update Contributing file 2024-07-11 20:20:08 +04:00
Inex Code 859ac4dbc6 chore: Update nixpkgs 2024-07-11 19:08:04 +04:00
Inex Code 4ca9b9f54e fix: Wait for ws logs test to init 2024-07-10 21:46:14 +04:00
Inex Code faa8952e9c chore: Bump version to 3.3.0 2024-07-10 19:51:10 +04:00
Inex Code 5f3fc0d96e chore: formatting 2024-07-10 19:18:22 +04:00
Inex Code 9f5f0507e3 Merge remote-tracking branch 'origin/master' into api-logs 2024-07-10 18:52:10 +04:00
Inex Code ceee6e4db9 fix: Read auth token from the connection initialization payload
Websockets do not provide headers, and sending a token as a query param is also not good (it gets into server's logs),
As an alternative, we can provide a token in the first ws payload.

Read more: https://strawberry.rocks/docs/general/subscriptions#authenticating-subscriptions
2024-07-05 18:14:18 +04:00
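
A sketch of the server side of this approach, based on the linked Strawberry docs rather than on this repository's code (the payload key and the validator are illustrative):

```python
from typing import AsyncGenerator

import strawberry
from strawberry.types import Info


def is_token_valid(token) -> bool:
    # Stand-in for a real check against the token store.
    return token is not None


@strawberry.type
class Subscription:
    @strawberry.subscription
    async def job_updates(self, info: Info) -> AsyncGenerator[str, None]:
        # Strawberry exposes the GraphQL-WS connection_init payload
        # to resolvers as "connection_params" in the context.
        params = info.context.get("connection_params") or {}
        if not is_token_valid(params.get("Authorization")):
            raise Exception("Unauthorized")
        yield "authorized"
```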
Inex Code a7be03a6d3 refactor: Remove setting KEA
This is already done via NixOS config
2024-07-04 18:49:17 +04:00
Houkime 9accf861c5 fix(websockets): add websockets dep so that uvicorn works 2024-07-04 17:19:25 +03:00
Houkime 41f6d8b6d2 test(websocket): remove some duplication 2024-07-04 17:19:25 +03:00
Houkime 57378a7940 test(websocket): remove excessive sleeping 2024-07-04 17:19:25 +03:00
Houkime 05ffa036b3 refactor(jobs): offload job subscription logic to a separate file 2024-07-04 17:19:25 +03:00
Houkime ccf71078b8 feature(websocket): add auth to counter too 2024-07-04 17:19:25 +03:00
Houkime cb641e4f37 feature(websocket): add auth 2024-07-04 17:19:25 +03:00
Houkime 0fda29cdd7 test(devices): provide devices for a service test to fix conditional test fail. 2024-07-04 17:19:25 +03:00
Houkime 442538ee43 feature(jobs): UNSAFE endpoint to get job updates 2024-07-04 17:19:25 +03:00
Houkime 51ccde8b07 test(jobs): test simple counting 2024-07-04 17:19:25 +03:00
Houkime cbe5c56270 chore(jobs): shorter typehints and import sorting 2024-07-04 17:19:25 +03:00
Houkime ed777e3ebf feature(jobs): add subscription endpoint 2024-07-04 17:19:25 +03:00
Houkime f14866bdbc test(websocket): separate ping and init 2024-07-04 17:19:25 +03:00
Houkime a2a4b461e7 test(websocket): ping pong test 2024-07-04 17:19:25 +03:00
Houkime 9add0b1dc1 test(websocket) test connection init 2024-07-04 17:19:25 +03:00
Houkime 00c42d9660 test(jobs): subscription query generating function 2024-07-04 17:19:25 +03:00
Houkime 2d9f48650e test(jobs) test API job format 2024-07-04 17:19:25 +03:00
Houkime c4aa757ca4 test(jobs): test Graphql job getting 2024-07-04 17:19:25 +03:00
Houkime 63d2e48a98 feature(jobs): websocket connection 2024-07-04 17:19:25 +03:00
Houkime 9bfffcd820 feature(jobs): job update generator 2024-07-04 17:19:25 +03:00
Houkime 6510d4cac6 feature(redis): enable key space notifications by default 2024-07-04 17:19:25 +03:00
Houkime fff8a49992 refactoring(jobs): break out a function returning all jobs 2024-07-04 17:19:25 +03:00
Houkime 5558577927 test(redis): test key event notifications 2024-07-04 17:19:25 +03:00
Houkime f08dc3ad23 test(async): pubsub 2024-07-04 17:19:25 +03:00
Houkime 94386fc53d chore(nixos): add pytest-asyncio 2024-07-04 17:19:25 +03:00
Houkime b6118465a0 feature(redis): async connections 2024-07-04 17:19:25 +03:00
Inex Code 4066be38ec chore: Bump version to 3.2.2 2024-07-01 19:25:54 +04:00
Inex Code 7522c2d796 refactor: Change gitea to Forgejo 2024-06-30 23:02:07 +04:00
Inex Code 6e0bf4f2a3 chore: PR cleanup 2024-06-27 17:43:13 +03:00
Inex Code c42e2ef3ac Revert "feat: move get_subdomain to parent class really"
This reverts commit 4eaefc8321.
2024-06-27 17:43:13 +03:00
Inex Code 8bb9166287 Revert "fix: remove get sub domain from services"
This reverts commit 46fd7a237c.
2024-06-27 17:43:13 +03:00
Inex Code 306b7f898d Revert "feat: rewrite get_url()"
This reverts commit f834c85401.
2024-06-27 17:43:13 +03:00
nhnn f1cc84b8c8 fix: add migrations to migration list in migrations/__init__.py 2024-06-27 17:43:13 +03:00
dettlaff 02bc74f4c4 fix: only roundcube migration, other services removed 2024-06-27 17:43:13 +03:00
dettlaff 416a0a8725 fix: from review 2024-06-27 17:43:13 +03:00
dettlaff 82a0b557e1 feat: add migration for userdata 2024-06-27 17:43:13 +03:00
dettlaff 7b9420c244 feat: rewrite get_url() 2024-06-27 17:43:13 +03:00
dettlaff 9125d03b35 fix: remove get sub domain from services 2024-06-27 17:43:13 +03:00
dettlaff 2b9b81890b feat: move get_subdomain to parent class really 2024-06-27 17:43:13 +03:00
dettlaff 78dec5c347 feat: move get_subdomain to parent class 2024-06-27 17:43:13 +03:00
dettlaff 4d898f4ee8 feat: add migration for services flake 2024-06-27 17:43:13 +03:00
dettlaff 31feeb211d fix: change roundcube to webmail 2024-06-27 17:43:13 +03:00
dettlaff a00c4d4268 fix: change return get_folders 2024-06-27 17:43:13 +03:00
dettlaff 9c50f8bba7 fix from review 2024-06-27 17:43:13 +03:00
dettlaff 1b91168d06 style: fix imports 2024-06-27 17:43:13 +03:00
dettlaff 4823491e3e feat: add roundcube service 2024-06-27 17:43:13 +03:00
Maxim Leshchenko 5602c96056 feat(services): rename "sda1" to "system disk" etc. (#122)
Closes #51

Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/122
Reviewed-by: Inex Code <inex.code@selfprivacy.org>
Co-authored-by: Maxim Leshchenko <cnmaks90@gmail.com>
Co-committed-by: Maxim Leshchenko <cnmaks90@gmail.com>
2024-06-27 17:41:46 +03:00
dettlaff f90eb3fb4c feat: add flake services manager (#113)
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/113
Reviewed-by: Inex Code <inex.code@selfprivacy.org>
Reviewed-by: houkime <houkime@protonmail.com>
Co-authored-by: dettlaff <dettlaff@riseup.net>
Co-committed-by: dettlaff <dettlaff@riseup.net>
2024-06-21 23:35:04 +03:00
nhnn 8b2e4666dd fix: rename PageMeta to LogsPageMeta 2024-06-11 12:36:42 +03:00
nhnn 3d2c79ecb1 feat: streaming of journald entries via graphql subscription 2024-06-06 16:07:08 +03:00
nhnn fc2ac0fe6d feat: graphql endpoint to fetch system logs from journald 2024-06-06 16:03:16 +03:00
Houkime cb2a1421bf test(websocket): remove some duplication 2024-05-27 21:30:51 +00:00
Houkime 17ae162156 test(websocket): remove excessive sleeping 2024-05-27 21:30:51 +00:00
Houkime f772005b17 refactor(jobs): offload job subscription logic to a separate file 2024-05-27 21:30:51 +00:00
Houkime 950093a3b1 feature(websocket): add auth to counter too 2024-05-27 21:30:51 +00:00
Houkime 8fd12a1775 feature(websocket): add auth 2024-05-27 21:30:51 +00:00
Houkime 39f584ad5c test(devices): provide devices for a service test to fix conditional test fail. 2024-05-27 21:30:51 +00:00
Houkime 6d2fdab071 feature(jobs): UNSAFE endpoint to get job updates 2024-05-27 21:30:51 +00:00
Houkime 3910e416db test(jobs): test simple counting 2024-05-27 21:30:51 +00:00
Houkime 967e59271f chore(jobs): shorter typehints and import sorting 2024-05-27 21:30:51 +00:00
Houkime 3b0600efb6 feature(jobs): add subscription endpoint 2024-05-27 21:30:51 +00:00
Houkime 8348f11faf test(websocket): separate ping and init 2024-05-27 21:30:51 +00:00
Houkime 02d337c3f0 test(websocket): ping pong test 2024-05-27 21:30:51 +00:00
Houkime c19fa227c9 test(websocket) test connection init 2024-05-27 21:30:51 +00:00
Houkime 098abd5149 test(jobs): subscription query generating function 2024-05-27 21:30:51 +00:00
Houkime 4306c94231 test(jobs) test API job format 2024-05-27 21:30:51 +00:00
Houkime 1fadf0214b test(jobs): test Graphql job getting 2024-05-27 21:30:51 +00:00
Houkime 4b1becb4e2 feature(jobs): websocket connection 2024-05-27 21:30:51 +00:00
Houkime 43980f16ea feature(jobs): job update generator 2024-05-27 21:30:51 +00:00
Houkime b204d4a9b3 feature(redis): enable key space notifications by default 2024-05-27 21:30:51 +00:00
Houkime 8d099c9a22 refactoring(jobs): break out a function returning all jobs 2024-05-27 21:30:51 +00:00
Houkime 5bf5e7462f test(redis): test key event notifications 2024-05-27 21:30:51 +00:00
Houkime 4d60b7264a test(async): pubsub 2024-05-27 21:30:51 +00:00
Houkime 996cde15e1 chore(nixos): add pytest-asyncio 2024-05-27 21:30:51 +00:00
Houkime 862f85b8fd feature(redis): async connections 2024-05-27 21:30:51 +00:00
Inex Code a742e66cc3 feat: Add "OTHER" as a server provider
We should allow manual SelfPrivacy installations on unsupported server providers. The ServerProvider enum is one of the gatekeepers that prevent this, and we can change it easily, as not much server-side logic relies on it.

The next step would be manual DNS management, but it would be much more involved than just adding the enum value.
2024-05-25 14:12:51 +03:00
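
The change itself is small; a hedged sketch of what such an enum extension looks like (only OTHER is confirmed by this commit, the other members are placeholders):

```python
from enum import Enum

import strawberry


@strawberry.enum
class ServerProvider(Enum):
    HETZNER = "HETZNER"  # placeholder member
    DIGITALOCEAN = "DIGITALOCEAN"  # placeholder member
    OTHER = "OTHER"  # manual installs on unsupported providers
```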
Inex Code 4f1d44ce74 chore: Bump version to 3.2.1 2024-05-24 22:53:58 +03:00
Houkime 8e8e76a954 fix(backups): fix orphaned snapshots erroring out 2024-05-24 12:30:27 +00:00
Inex Code 5a100ec33a chore: Bump version to 3.2.0 2024-05-22 10:57:59 +03:00
Inex Code 524adaa8bc add nix-collect-garbage endpoint (#112)
Continuation of the broken #21

Co-authored-by: dettlaff <dettlaff@riseup.net>
Co-authored-by: def <dettlaff@riseup.net>
Co-authored-by: Houkime <>
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/112
Reviewed-by: houkime <houkime@protonmail.com>
2024-05-01 16:10:39 +03:00
houkime 5e93e6499f Merge pull request 'redis-huey' (#84) from redis-huey into master
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/84
Reviewed-by: Inex Code <inex.code@selfprivacy.org>
2024-03-20 14:19:07 +02:00
houkime 3302fe2818 Merge pull request 'Censor out secret keys from backup error messages' (#108) from censor-errors into master
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/108
Reviewed-by: Inex Code <inex.code@selfprivacy.org>
2024-03-20 14:18:39 +02:00
Houkime 9ee72c1fcb test(huey): make timeout more so that vm gets it in time 2024-03-20 09:02:10 +00:00
Houkime 28556bd22d test(backups): move errored job checker into common test utils 2024-03-18 17:40:48 +00:00
Houkime c5b227226c fix(backups): do not rely on obscure behaviour 2024-03-18 17:33:45 +00:00
Inex Code 5ec677339b Merge pull request 'docs(api): add a CI badge' (#107) from ci-badge into master
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/107
Reviewed-by: Inex Code <inex.code@selfprivacy.org>
2024-03-18 19:28:31 +02:00
Houkime f2446dcee2 docs(api): add missing dollar sign 2024-03-18 19:28:20 +02:00
Houkime 97960f77f2 docs(api): use title case in README 2024-03-18 19:28:20 +02:00
Houkime 677ed27773 docs(api): add a CI badge 2024-03-18 19:28:20 +02:00
Houkime b40df670f8 fix(backups): censor out keys from error messages
We do not have any automated sending of errors to SelfPrivacy, but unredacted
keys were inconvenient for people who want to send a screenshot of their error.
2024-03-18 17:15:40 +00:00
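
The fix (visible in the ResticBackupper diff further down) boils down to substring replacement over the command array before it is attached to the error; in sketch form:

```python
from typing import List


def censor_command(command: List[str], secrets: List[str]) -> List[str]:
    # Replace every occurrence of a known secret in the command's
    # arguments with a placeholder before logging or reporting it.
    result = command.copy()
    for secret in secrets:
        if secret == "":
            continue
        result = [arg.replace(secret, "CENSORED") for arg in result]
    return result
```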
Houkime b36701e31c style(api): enable pydantic support in mypy 2024-03-18 17:11:27 +00:00
Houkime b39558ea1f fix(backups): report error in the error field of the job 2024-03-18 17:00:55 +00:00
Houkime 6f38b2309f fix(huey): adapt to new VM test environment 2024-03-18 12:18:55 +00:00
Houkime baf7843349 test(huey): only import test task if it is a test 2024-03-18 12:18:55 +00:00
Houkime 8e48a5ad5f test(huey): add a scheduling test (expected-fails for now) 2024-03-18 12:18:55 +00:00
Houkime fde461b4b9 test(huey): test that redis socket connection works 2024-03-18 12:18:55 +00:00
Houkime 9954737791 use kill() instead of terminate in huey tests 2024-03-18 12:18:55 +00:00
Houkime 2b19633cbd test(huey): break out preparing the environment vars
I did this for testing Redis sockets too, but that will have to wait for
another time. Somehow it worked even without an actual Redis socket, which was
unsettling. It is not yet clear how to best make Redis create sockets in arbitrary
temporary dirs without starting another Redis instance.
2024-03-18 12:18:55 +00:00
Houkime 83592b7bf4 feature(huey): use RedisHuey 2024-03-18 12:18:55 +00:00
houkime efc6b47cfe Merge pull request 'rebuild-when-moving' (#101) from rebuild-when-moving into master
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/101
Reviewed-by: Inex Code <inex.code@selfprivacy.org>
2024-03-18 14:14:08 +02:00
Houkime b2edfe784a refactor(service): add return typing to DNSrecord conversion and comments 2024-03-18 11:44:53 +00:00
Houkime 6e29da4a4f test(service): test moving with rebuilding via fp 2024-03-18 11:32:02 +00:00
Houkime 12b2153b7c test(service): do not call bash needlessly (it screwed up with fp) 2024-03-18 11:32:02 +00:00
Houkime 8c8c9a51cc refactor(service): visually break down the move function a bit 2024-03-18 11:32:02 +00:00
Houkime fed5735b24 refactor(service): break out DNS records into a separate resolver field 2024-03-18 11:32:02 +00:00
Houkime b257d7f39e fix(service): FAILING TESTS, rebuild when moving 2024-03-18 11:32:02 +00:00
Houkime 70a0287794 refactor(service): move finishing the job out of moving function 2024-03-18 11:32:02 +00:00
Houkime 534d965cab refactor(service): break out sync rebuilding 2024-03-18 11:32:02 +00:00
Houkime f333e791e1 refactor(service): break out ServiceStatus and ServiceDNSRecord 2024-03-18 11:32:02 +00:00
houkime 962e8d5ca7 Merge pull request 'CI: run pytest and coverage tests inside ephemeral VM in the "builder" VM (nested)' (#103) from ci-vm-for-pytest into master
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/103
Reviewed-by: houkime <houkime@protonmail.com>
2024-03-18 12:07:54 +02:00
Alexander Tomokhov 5e29816c84 ci: delete USE_REDIS_PORT environment variable 2024-03-16 00:18:01 +04:00
Alexander Tomokhov 53ec774c90 flake: VM test: remove Redis service port number setting 2024-03-15 16:23:21 +04:00
Inex Code bda21b7507 fix: Mark md5 as not used for security 2024-03-15 16:14:31 +04:00
Inex Code 2d5ac51c06 fix: future mock are now more in the future 2024-03-15 16:14:31 +04:00
Alexander Tomokhov 61b9a00cea ci: run pytest and coverage as part of nix flake check in VM 2024-03-15 16:14:31 +04:00
houkime edcc7860e4 Merge pull request 'chore(api): update nixpkgs version and add a script to do it' (#104) from update-nixpkgs into master
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/104
Reviewed-by: Inex Code <inex.code@selfprivacy.org>
2024-03-15 13:07:08 +02:00
Houkime 64da8503dd chore(api): update nixpkgs version and add a script to do it 2024-03-15 11:01:34 +00:00
houkime d464f3b82d Merge pull request 'flake VM: add additional /dev/vdb disk with empty ext4 FS' (#102) from vm-disk into master
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/102
Reviewed-by: Inex Code <inex.code@selfprivacy.org>
Reviewed-by: houkime <houkime@protonmail.com>
2024-03-15 11:42:37 +02:00
Alexander Tomokhov bddc6d1831 flake: VM: add one more disk (/dev/vdc) volume with empty ext4 FS 2024-03-14 07:07:23 +04:00
Alexander Tomokhov 5d01c25f3b flake: VM: add additional disk with empty ext4 FS 2024-03-08 14:43:31 +04:00
Alexander Tomokhov 69774ba186 flake: small optimization: mkShell => mkShellNoCC 2024-03-08 14:43:31 +04:00
Inex Code 1f1fcc223b fix: division by zero 2024-03-07 23:29:37 +03:00
Inex Code a543f6da2a chore: Bump version to 3.1.0 2024-03-07 23:12:45 +03:00
Inex Code cf2f153cfe Merge pull request 'feat: Basic tracking of the NixOS rebuilds' (#98) from system-rebuild-tracking into master
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/98
Reviewed-by: houkime <houkime@protonmail.com>
2024-03-06 18:12:21 +02:00
Inex Code 0eff0ef735 fix: move_service task path 2024-03-06 18:43:55 +03:00
Houkime 7dae81530e test(services): clean up tests 2024-03-06 18:40:05 +03:00
Houkime fd43a6ccf1 doc(services): explain the Owned Path raison d'être after trying to remove it 2024-03-06 18:40:05 +03:00
Houkime eeef2891c9 fix(services): fix merge bug 2024-03-06 18:40:05 +03:00
Houkime 3f9d2b2481 refactor(services): remove too many imports and cleanup 2024-03-06 18:40:05 +03:00
Houkime 305e5cc2c3 refactor(services): introduce Bind class and test moving deeper 2024-03-06 18:40:05 +03:00
Houkime 1e51f51844 feature(backups): intermittent commit for binds, to be replaced 2024-03-06 18:40:05 +03:00
Houkime 235c59b556 refactor(services): break out location construction when moving 2024-03-06 18:40:05 +03:00
Houkime ddca1b0cde refactor(services): fix type annotation 2024-03-06 18:40:05 +03:00
Houkime c22802f693 fix(services): check for possible None progress when moving folders 2024-03-06 18:40:05 +03:00
Houkime 17a1e34c0d feature(services): check before moving task and before move itself 2024-03-06 18:40:05 +03:00
Houkime d7ef2ed09a refactor(services): make moving a part of generic service functionality 2024-03-06 18:39:27 +03:00
Houkime 7fd09982a4 fix(services): a better error message 2024-03-06 18:39:27 +03:00
Houkime b054235d96 test(services): remove unused json 2024-03-06 18:39:27 +03:00
Houkime 2519a50aac test(services): merge def and current service tests 2024-03-06 18:39:27 +03:00
Houkime d34db3d661 fix(services): report moving errors fully 2024-03-06 18:39:27 +03:00
Houkime 28fdf8fb49 refactor(service_mover): decompose the giant move_service 2024-03-06 18:39:27 +03:00
def 18327ffa85 test: remove unused mocks, fix tests naming 2024-03-06 18:39:27 +03:00
def b5183948af fix: service tests 2024-03-06 18:39:27 +03:00
def e01b8ed8f0 add test_api_services.py 2024-03-06 18:39:27 +03:00
def 5cd1e28632 add storage tests 2024-03-06 18:39:27 +03:00
Inex Code f895f2a38b refactor: Return last 10 log lines when system rebuild failed 2024-03-06 18:33:55 +03:00
Inex Code 8a607b9609 Merge pull request 'def_tests_reworked' (#88) from def_tests_reworked into master
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/88
Reviewed-by: Inex Code <inex.code@selfprivacy.org>
2024-03-05 16:40:15 +02:00
Inex Code c733cfeb9e Merge remote-tracking branch 'origin/system-rebuild-tracking' into system-rebuild-tracking 2024-03-05 14:41:43 +03:00
Inex Code 71433da424 refactor: move systemd functions to utils 2024-03-05 11:55:52 +03:00
Houkime ee7c41e0c2 test(services): clean up tests 2024-03-04 17:37:26 +00:00
Houkime 1bed9d87ca doc(services): explain the Owned Path raison d'être after trying to remove it 2024-03-04 17:16:08 +00:00
Houkime 2c1c783b5e fix(services): fix merge bug 2024-03-04 14:26:26 +00:00
Houkime 8402f66a33 refactor(services): remove too many imports and cleanup 2024-03-04 14:12:44 +00:00
Houkime 1599f601a2 refactor(services): introduce Bind class and test moving deeper 2024-03-04 14:12:44 +00:00
Houkime 0068272382 feature(backups): intermittent commit for binds, to be replaced 2024-03-04 14:12:43 +00:00
Houkime 18934a53e6 refactor(services): break out location construction when moving 2024-03-04 14:12:43 +00:00
Houkime baaf3299ce refactor(services): fix type annotation 2024-03-04 14:12:43 +00:00
Houkime f059c83b57 fix(services): check for possible None progress when moving folders 2024-03-04 14:12:43 +00:00
Houkime fb41c092f1 feature(services): check before moving task and before move itself 2024-03-04 14:12:37 +00:00
Houkime c947922a5d refactor(services): make moving a part of generic service functionality 2024-03-04 13:30:03 +00:00
Houkime b22dfc0469 fix(services): a better error message 2024-03-04 13:30:03 +00:00
Houkime b3c7e2fa9e test(services): remove unused json 2024-03-04 13:30:03 +00:00
Houkime 6cd1d27902 test(services): merge def and current service tests 2024-03-04 13:30:03 +00:00
Houkime e42da357fb fix(services): report moving errors fully 2024-03-04 13:30:03 +00:00
Houkime 2863dd9763 refactor(service_mover): decompose the giant move_service 2024-03-04 13:30:03 +00:00
def 0309e6b76e test: remove unused mocks, fix tests naming 2024-03-04 13:30:03 +00:00
def f4739d4539 fix: service tests 2024-03-04 13:30:03 +00:00
def 20c089154d add test_api_services.py 2024-03-04 13:30:03 +00:00
def e703206e9d add storage tests 2024-03-04 13:30:03 +00:00
Inex Code 96f8aad146 Merge branch 'master' into system-rebuild-tracking 2024-03-04 10:54:43 +02:00
Inex Code 0e94590420 Merge pull request 'simplify autobackups tasking to avoid deadlocks' (#97) from fix-autobackup-typing into master
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/97
Reviewed-by: Inex Code <inex.code@selfprivacy.org>
2024-03-03 23:46:15 +02:00
Inex Code 36d026a8ca style: Formatting 2024-03-04 00:45:45 +03:00
Inex Code 8cb812be56 chore: Remove debug leftover 2024-03-03 12:00:07 +03:00
Houkime 7ccf495958 refactor(backups): remove excessive format-strings 2024-03-01 13:59:43 +00:00
Houkime f840a6e204 feature(devshell): add pyflakes to catch missing imports 2024-03-01 13:55:02 +00:00
Houkime f5d7666614 refactor(backups): remove excessive imports 2024-03-01 13:54:10 +00:00
Houkime 76f5b57c86 refactor(jobs): add explicit return statements 2024-03-01 12:44:08 +00:00
Houkime bf33fff20d fix(backups): finish the autobackup job 2024-03-01 12:44:08 +00:00
Houkime 742bb239e7 fix(backups): simplify autobackups to avoid deadlocks 2024-03-01 12:44:08 +00:00
Inex Code e16f4499f8 Merge pull request 'fix(dns): Ignore link-local IPv6 address' (#99) from inex/fix-linklocal-ipv6 into master
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/99
2024-03-01 14:13:15 +02:00
Inex Code 5616dbe77a style: rename ip6 addresses variable 2024-03-01 15:06:32 +03:00
Inex Code bbec9d9d33 refactor: use ipaddress library for ip validation 2024-03-01 14:58:28 +03:00
Inex Code a4327fa669 fix(dns): Ignore link-local IPv6 address 2024-03-01 03:21:31 +03:00
Inex Code 2443ae0144 chore: Remove version flavor 2024-02-26 22:51:31 +03:00
Inex Code c63552241c tests: Cover upgrade and rebuild task 2024-02-26 22:49:32 +03:00
Inex Code d8666fa179 Merge commit '4757bedc4ec62d3577fd1f259abbe34ba6dce893' into system-rebuild-tracking 2024-02-26 18:27:54 +03:00
Inex Code 25c691104f fix: non-0 exit status of is-active 2024-02-12 18:58:27 +03:00
Inex Code 1a34558e23 chore: Shorten the output on status_text 2024-02-12 18:54:32 +03:00
Inex Code c851c3d193 chore: more debugging output 2024-02-12 18:53:14 +03:00
Inex Code ad069a2ad2 fix: wrong unit name again 2024-02-12 18:47:37 +03:00
Inex Code b98c020f23 fix: wrong systemd unit used 2024-02-12 18:41:24 +03:00
Inex Code 94456af7d4 fix: debugging 2024-02-12 18:34:55 +03:00
Inex Code ab1ca6e59c fix: register huey task 2024-02-12 18:27:32 +03:00
Inex Code 00bcca0f99 fix: invalid setuptools version 2024-02-12 18:24:54 +03:00
Inex Code 56de00226a chore: Testing env 2024-02-12 18:21:09 +03:00
Inex Code 2019da1e10 feat: Track the status of the nixos rebuild systemd unit 2024-02-12 18:17:18 +03:00
140 changed files with 7638 additions and 1335 deletions


@@ -5,18 +5,11 @@ name: default
 steps:
   - name: Run Tests and Generate Coverage Report
     commands:
-      - kill $(ps aux | grep 'redis-server 127.0.0.1:6389' | awk '{print $2}') || true
-      - redis-server --bind 127.0.0.1 --port 6389 >/dev/null &
-      # We do not care about persistance on CI
-      - sleep 10
-      - redis-cli -h 127.0.0.1 -p 6389 config set stop-writes-on-bgsave-error no
-      - coverage run -m pytest -q
-      - coverage xml
+      - nix flake check -L
       - sonar-scanner -Dsonar.projectKey=SelfPrivacy-REST-API -Dsonar.sources=. -Dsonar.host.url=http://analyzer.lan:9000 -Dsonar.login="$SONARQUBE_TOKEN"
     environment:
       SONARQUBE_TOKEN:
         from_secret: SONARQUBE_TOKEN
-      USE_REDIS_PORT: 6389
 
   - name: Run Bandit Checks

.gitignore (vendored): Executable file → Normal file, 0 lines changed

.mypy.ini (new Normal file, 2 lines changed)

@@ -0,0 +1,2 @@
+[mypy]
+plugins = pydantic.mypy


@@ -1,7 +1,4 @@
 {
-    "python.formatting.provider": "black",
-    "python.linting.pylintEnabled": true,
-    "python.linting.enabled": true,
     "python.testing.pytestArgs": [
         "tests"
     ],
@@ -9,4 +6,4 @@
     "python.testing.pytestEnabled": true,
     "python.languageServer": "Pylance",
     "python.analysis.typeCheckingMode": "basic"
 }


@@ -13,9 +13,9 @@ the [repository](https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api),
 For detailed installation information, please review and follow: [link](https://nixos.org/manual/nix/stable/installation/installing-binary.html#installing-a-binary-distribution).
 
-3. **Change directory to the cloned repository and start a nix shell:**
+3. **Change directory to the cloned repository and start a nix development shell:**
 
-   ```cd selfprivacy-rest-api && nix-shell```
+   ```cd selfprivacy-rest-api && nix develop```
 
 Nix will install all of the necessary packages for development work, all further actions will take place only within nix-shell.
@@ -31,7 +31,7 @@ the [repository](https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api),
 Copy the path that starts with ```/nix/store/``` and ends with ```env/bin/python```
 
-```/nix/store/???-python3-3.9.??-env/bin/python```
+```/nix/store/???-python3-3.10.??-env/bin/python```
 
 Click on the python version selection in the lower right corner, and replace the path to the interpreter in the project with the one you copied from the terminal.
@@ -43,12 +43,13 @@ the [repository](https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api),
 ## What to do after making changes to the repository?
 
-**Run unit tests** using ```pytest .```
+**Run unit tests** using ```pytest-vm``` inside of the development shell. This will run all the test inside a virtual machine, which is necessary for the tests to pass successfully.
-Make sure that all tests pass successfully and the API works correctly. For convenience, you can use the built-in VScode interface.
+Make sure that all tests pass successfully and the API works correctly.
 
-How to review the percentage of code coverage? Execute the command:
+The ```pytest-vm``` command will also print out the coverage of the tests. To export the report to an XML file, use the following command:
 
-```coverage run -m pytest && coverage xml && coverage report```
+```coverage xml```
 
 Next, use the recommended extension ```ryanluker.vscode-coverage-gutters```, navigate to one of the test files, and click the "watch" button on the bottom panel of VScode.


@@ -1,6 +1,8 @@
 # SelfPrivacy GraphQL API which allows app to control your server
 
-## build
+![CI status](https://ci.selfprivacy.org/api/badges/SelfPrivacy/selfprivacy-rest-api/status.svg)
+
+## Build
 
 ```console
 $ nix build
@@ -8,7 +10,7 @@ $ nix build
 In case of successful build, you should get the `./result` symlink to a folder (in `/nix/store`) with build contents.
 
-## develop
+## Develop
 
 ```console
 $ nix develop
@@ -21,10 +23,10 @@ Type "help", "copyright", "credits" or "license" for more information.
 If you don't have experimental flakes enabled, you can use the following command:
 
 ```console
-nix --extra-experimental-features nix-command --extra-experimental-features flakes develop
+$ nix --extra-experimental-features nix-command --extra-experimental-features flakes develop
 ```
 
-## testing
+## Testing
 
 Run the test suite by running coverage with pytest inside an ephemeral NixOS VM with redis service enabled:
 ```console
@@ -61,7 +63,7 @@ $ TMPDIR=".nixos-vm-tmp-dir" nix run .#checks.x86_64-linux.default.driverInterac
 Option `-L`/`--print-build-logs` is optional for all nix commands. It tells nix to print each log line one after another instead of overwriting a single one.
 
-## dependencies and dependant modules
+## Dependencies and Dependant Modules
 
 This flake depends on a single Nix flake input - nixpkgs repository. nixpkgs repository is used for all software packages used to build, run API service, tests, etc.
@@ -85,6 +87,6 @@ $ nix flake metadata git+https://git.selfprivacy.org/SelfPrivacy/selfprivacy-nix
 Nix code for NixOS service module for API is located in NixOS configuration repository.
 
-## troubleshooting
+## Troubleshooting
 
 Sometimes commands inside `nix develop` refuse to work properly if the calling shell lacks `LANG` environment variable. Try to set it before entering `nix develop`.


@@ -14,10 +14,14 @@ pythonPackages.buildPythonPackage rec {
     pydantic
     pytz
     redis
+    systemd
     setuptools
     strawberry-graphql
     typing-extensions
     uvicorn
+    requests
+    websockets
+    httpx
   ];
   pythonImportsCheck = [ "selfprivacy_api" ];
   doCheck = false;


@@ -2,11 +2,11 @@
   "nodes": {
     "nixpkgs": {
       "locked": {
-        "lastModified": 1702780907,
-        "narHash": "sha256-blbrBBXjjZt6OKTcYX1jpe9SRof2P9ZYWPzq22tzXAA=",
+        "lastModified": 1721949857,
+        "narHash": "sha256-DID446r8KsmJhbCzx4el8d9SnPiE8qa6+eEQOJ40vR0=",
         "owner": "nixos",
         "repo": "nixpkgs",
-        "rev": "1e2e384c5b7c50dbf8e9c441a9e58d85f408b01f",
+        "rev": "a1cc729dcbc31d9b0d11d86dc7436163548a9665",
         "type": "github"
       },
       "original": {


@@ -8,7 +8,7 @@
       system = "x86_64-linux";
       pkgs = nixpkgs.legacyPackages.${system};
       selfprivacy-graphql-api = pkgs.callPackage ./default.nix {
-        pythonPackages = pkgs.python310Packages;
+        pythonPackages = pkgs.python312Packages;
         rev = self.shortRev or self.dirtyShortRev or "dirty";
       };
       python = self.packages.${system}.default.pythonModule;
@@ -19,12 +19,16 @@
         pytest
         pytest-datadir
         pytest-mock
+        pytest-subprocess
+        pytest-asyncio
         black
         mypy
         pylsp-mypy
         python-lsp-black
         python-lsp-server
+        pyflakes
         typer # for strawberry
+        types-redis # for mypy
       ] ++ strawberry-graphql.optional-dependencies.cli));
 
       vmtest-src-dir = "/root/source";
@@ -36,9 +40,17 @@
         black
         nixpkgs-fmt
 
+        [linters]
+        bandit
+        CI uses the following command:
+        bandit -ll -r selfprivacy_api
+        mypy
+        pyflakes
+
         [testing in NixOS VM]
-        nixos-test-driver - run an interactive NixOS VM with with all dependencies
+        nixos-test-driver - run an interactive NixOS VM with all dependencies included and 2 disk volumes
         pytest-vm - run pytest in an ephemeral NixOS VM with Redis, accepting pytest arguments
       '';
       in
@@ -62,7 +74,7 @@
           SCRIPT=$(cat <<EOF
           start_all()
           machine.succeed("ln -sf $NIXOS_VM_SHARED_DIR_GUEST -T ${vmtest-src-dir} >&2")
-          machine.succeed("cd ${vmtest-src-dir} && coverage run -m pytest -v $@ >&2")
+          machine.succeed("cd ${vmtest-src-dir} && coverage run -m pytest $@ >&2")
           machine.succeed("cd ${vmtest-src-dir} && coverage report >&2")
           EOF
           )
@@ -76,13 +88,14 @@
       };
       nixosModules.default =
         import ./nixos/module.nix self.packages.${system}.default;
-      devShells.${system}.default = pkgs.mkShell {
+      devShells.${system}.default = pkgs.mkShellNoCC {
         name = "SP API dev shell";
         packages = with pkgs; [
           nixpkgs-fmt
           rclone
-          redis
+          valkey
           restic
+          bandit
           self.packages.${system}.pytest-vm
           # FIXME consider loading this explicitly only after ArchLinux issue is solved
           self.checks.x86_64-linux.default.driverInteractive
@@ -111,38 +124,48 @@
           "black --check ${self.outPath} > $out";
         default =
           pkgs.testers.runNixOSTest {
-            imports = [{
-              name = "default";
-              nodes.machine = { lib, pkgs, ... }: {
-                imports = [{
-                  boot.consoleLogLevel = lib.mkForce 3;
-                  documentation.enable = false;
-                  services.journald.extraConfig = lib.mkForce "";
-                  services.redis.servers.sp-api = {
-                    enable = true;
-                    save = [ ];
-                    port = 6379; # FIXME
-                    settings.notify-keyspace-events = "KEA";
-                  };
-                  environment.systemPackages = with pkgs; [
-                    python-env
-                    # TODO: these can be passed via wrapper script around app
-                    rclone
-                    restic
-                  ];
-                  environment.variables.TEST_MODE = "true";
-                  systemd.tmpfiles.settings.src.${vmtest-src-dir}.L.argument =
-                    self.outPath;
-                }];
-              };
-              testScript = ''
-                start_all()
-                machine.succeed("cd ${vmtest-src-dir} && coverage run --data-file=/tmp/.coverage -m pytest -p no:cacheprovider -v >&2")
-                machine.succeed("coverage xml --rcfile=${vmtest-src-dir}/.coveragerc --data-file=/tmp/.coverage >&2")
-                machine.copy_from_vm("coverage.xml", ".")
-                machine.succeed("coverage report >&2")
-              '';
-            }];
+            name = "default";
+            nodes.machine = { lib, pkgs, ... }: {
+              # 2 additional disks (1024 MiB and 200 MiB) with empty ext4 FS
+              virtualisation.emptyDiskImages = [ 1024 200 ];
+              virtualisation.fileSystems."/volumes/vdb" = {
+                autoFormat = true;
+                device = "/dev/vdb"; # this name is chosen by QEMU, not here
+                fsType = "ext4";
+                noCheck = true;
+              };
+              virtualisation.fileSystems."/volumes/vdc" = {
+                autoFormat = true;
+                device = "/dev/vdc"; # this name is chosen by QEMU, not here
+                fsType = "ext4";
+                noCheck = true;
+              };
+              boot.consoleLogLevel = lib.mkForce 3;
+              documentation.enable = false;
+              services.journald.extraConfig = lib.mkForce "";
+              services.redis.package = pkgs.valkey;
+              services.redis.servers.sp-api = {
+                enable = true;
+                save = [ ];
+                settings.notify-keyspace-events = "KEA";
+              };
+              environment.systemPackages = with pkgs; [
+                python-env
+                # TODO: these can be passed via wrapper script around app
+                rclone
+                restic
+              ];
+              environment.variables.TEST_MODE = "true";
+              systemd.tmpfiles.settings.src.${vmtest-src-dir}.L.argument =
+                self.outPath;
+            };
+            testScript = ''
+              start_all()
+              machine.succeed("cd ${vmtest-src-dir} && coverage run --data-file=/tmp/.coverage -m pytest -p no:cacheprovider -v >&2")
+              machine.succeed("coverage xml --rcfile=${vmtest-src-dir}/.coveragerc --data-file=/tmp/.coverage >&2")
+              machine.copy_from_vm("coverage.xml", ".")
+              machine.succeed("coverage report >&2")
+            '';
           };
       };
     };


@@ -61,7 +61,7 @@ in
         HOME = "/root";
         PYTHONUNBUFFERED = "1";
         PYTHONPATH =
-          pkgs.python310Packages.makePythonPath [ selfprivacy-graphql-api ];
+          pkgs.python312Packages.makePythonPath [ selfprivacy-graphql-api ];
       } // config.networking.proxy.envVars;
       path = [
         "/var/"
@@ -82,7 +82,7 @@ in
       wantedBy = [ "network-online.target" ];
       serviceConfig = {
         User = "root";
-        ExecStart = "${pkgs.python310Packages.huey}/bin/huey_consumer.py selfprivacy_api.task_registry.huey";
+        ExecStart = "${pkgs.python312Packages.huey}/bin/huey_consumer.py selfprivacy_api.task_registry.huey";
         Restart = "always";
         RestartSec = "5";
       };


@@ -1,7 +1,8 @@
 """
 App tokens actions.
 The only actions on tokens that are accessible from APIs
 """
+
 from datetime import datetime, timezone
 from typing import Optional
 from pydantic import BaseModel


@@ -0,0 +1,34 @@
+from selfprivacy_api.utils.block_devices import BlockDevices
+from selfprivacy_api.jobs import Jobs, Job
+from selfprivacy_api.services import ServiceManager
+from selfprivacy_api.services.tasks import move_service as move_service_task
+
+
+class ServiceNotFoundError(Exception):
+    pass
+
+
+class VolumeNotFoundError(Exception):
+    pass
+
+
+def move_service(service_id: str, volume_name: str) -> Job:
+    service = ServiceManager.get_service_by_id(service_id)
+    if service is None:
+        raise ServiceNotFoundError(f"No such service:{service_id}")
+
+    volume = BlockDevices().get_block_device(volume_name)
+    if volume is None:
+        raise VolumeNotFoundError(f"No such volume:{volume_name}")
+
+    service.assert_can_move(volume)
+
+    job = Jobs.add(
+        type_id=f"services.{service.get_id()}.move",
+        name=f"Move {service.get_display_name()}",
+        description=f"Moving {service.get_display_name()} data to {volume.get_display_name().lower()}",
+    )
+
+    move_service_task(service, volume, job)
+    return job


@@ -1,4 +1,5 @@
 """Actions to manage the SSH."""
+
 from typing import Optional
 from pydantic import BaseModel
 from selfprivacy_api.actions.users import (


@@ -1,11 +1,18 @@
 """Actions to manage the system."""
+
 import os
 import subprocess
 import pytz
 from typing import Optional, List
 from pydantic import BaseModel
+
+from selfprivacy_api.jobs import Job, JobStatus, Jobs
+from selfprivacy_api.jobs.upgrade_system import rebuild_system_task
 from selfprivacy_api.utils import WriteUserData, ReadUserData
+from selfprivacy_api.utils import UserDataFiles
+from selfprivacy_api.graphql.queries.providers import DnsProvider
 
 
 def get_timezone() -> str:
@@ -37,6 +44,18 @@ class UserDataAutoUpgradeSettings(BaseModel):
     allowReboot: bool = False
 
 
+def set_dns_provider(provider: DnsProvider, token: str):
+    with WriteUserData() as user_data:
+        if "dns" not in user_data.keys():
+            user_data["dns"] = {}
+        user_data["dns"]["provider"] = provider.value
+
+    with WriteUserData(file_type=UserDataFiles.SECRETS) as secrets:
+        if "dns" not in secrets.keys():
+            secrets["dns"] = {}
+        secrets["dns"]["apiKey"] = token
+
+
 def get_auto_upgrade_settings() -> UserDataAutoUpgradeSettings:
     """Get the auto-upgrade settings"""
     with ReadUserData() as user_data:
@@ -46,14 +65,14 @@ def get_auto_upgrade_settings() -> UserDataAutoUpgradeSettings:
 
 def set_auto_upgrade_settings(
-    enalbe: Optional[bool] = None, allowReboot: Optional[bool] = None
+    enable: Optional[bool] = None, allowReboot: Optional[bool] = None
 ) -> None:
     """Set the auto-upgrade settings"""
     with WriteUserData() as user_data:
         if "autoUpgrade" not in user_data:
             user_data["autoUpgrade"] = {}
-        if enalbe is not None:
-            user_data["autoUpgrade"]["enable"] = enalbe
+        if enable is not None:
+            user_data["autoUpgrade"]["enable"] = enable
         if allowReboot is not None:
             user_data["autoUpgrade"]["allowReboot"] = allowReboot
 
@@ -87,10 +106,20 @@ def run_blocking(cmd: List[str], new_session: bool = False) -> str:
     return stdout
 
 
-def rebuild_system() -> int:
+def add_rebuild_job() -> Job:
+    return Jobs.add(
+        type_id="system.nixos.rebuild",
+        name="Rebuild system",
+        description="Applying the new system configuration by building the new NixOS generation.",
+        status=JobStatus.CREATED,
+    )
+
+
+def rebuild_system() -> Job:
     """Rebuild the system"""
-    run_blocking(["systemctl", "start", "sp-nixos-rebuild.service"], new_session=True)
-    return 0
+    job = add_rebuild_job()
+    rebuild_system_task(job)
+    return job
 
 
 def rollback_system() -> int:
@@ -99,10 +128,16 @@ def rollback_system() -> int:
     return 0
 
 
-def upgrade_system() -> int:
+def upgrade_system() -> Job:
     """Upgrade the system"""
-    run_blocking(["systemctl", "start", "sp-nixos-upgrade.service"], new_session=True)
-    return 0
+    job = Jobs.add(
+        type_id="system.nixos.upgrade",
+        name="Upgrade system",
+        description="Upgrading the system to the latest version.",
+        status=JobStatus.CREATED,
+    )
+    rebuild_system_task(job, upgrade=True)
+    return job
 
 
 def reboot_system() -> None:

@@ -1,4 +1,5 @@
 """Actions to manage the users."""
+
 import re
 from typing import Optional
 from pydantic import BaseModel


@@ -3,6 +3,7 @@
 from fastapi import FastAPI
 from fastapi.middleware.cors import CORSMiddleware
 from strawberry.fastapi import GraphQLRouter
+from strawberry.subscriptions import GRAPHQL_TRANSPORT_WS_PROTOCOL, GRAPHQL_WS_PROTOCOL
 
 import uvicorn
@@ -13,8 +14,12 @@ from selfprivacy_api.migrations import run_migrations
 
 app = FastAPI()
-graphql_app = GraphQLRouter(
+graphql_app: GraphQLRouter = GraphQLRouter(
     schema,
+    subscription_protocols=[
+        GRAPHQL_TRANSPORT_WS_PROTOCOL,
+        GRAPHQL_WS_PROTOCOL,
+    ],
 )
 
 app.add_middleware(


@ -1,16 +1,16 @@
""" """
This module contains the controller class for backups. This module contains the controller class for backups.
""" """
from datetime import datetime, timedelta, timezone from datetime import datetime, timedelta, timezone
import time import time
import os import os
from os import statvfs from os import statvfs
from typing import Callable, List, Optional from typing import Callable, List, Optional
from os.path import exists
from selfprivacy_api.services import ServiceManager
from selfprivacy_api.services import (
get_service_by_id,
get_all_services,
)
from selfprivacy_api.services.service import ( from selfprivacy_api.services.service import (
Service, Service,
ServiceStatus, ServiceStatus,
@ -30,6 +30,7 @@ from selfprivacy_api.graphql.common_types.backup import (
from selfprivacy_api.models.backup.snapshot import Snapshot from selfprivacy_api.models.backup.snapshot import Snapshot
from selfprivacy_api.utils.block_devices import BlockDevices
from selfprivacy_api.backup.providers.provider import AbstractBackupProvider from selfprivacy_api.backup.providers.provider import AbstractBackupProvider
from selfprivacy_api.backup.providers import get_provider from selfprivacy_api.backup.providers import get_provider
@ -259,7 +260,7 @@ class Backups:
Backups._prune_auto_snaps(service) Backups._prune_auto_snaps(service)
service.post_restore() service.post_restore()
except Exception as error: except Exception as error:
Jobs.update(job, status=JobStatus.ERROR, status_text=str(error)) Jobs.update(job, status=JobStatus.ERROR, error=str(error))
raise error raise error
Jobs.update(job, status=JobStatus.FINISHED) Jobs.update(job, status=JobStatus.FINISHED)
@ -274,10 +275,16 @@ class Backups:
This is a convenience, maybe it is better to write a special comparison This is a convenience, maybe it is better to write a special comparison
function for snapshots function for snapshots
""" """
return Storage.get_cached_snapshot_by_id(snapshot.id)
snap = Storage.get_cached_snapshot_by_id(snapshot.id)
if snap is None:
raise ValueError(
f"snapshot {snapshot.id} date syncing failed, this should never happen normally"
)
return snap
@staticmethod @staticmethod
def _auto_snaps(service): def _auto_snaps(service) -> List[Snapshot]:
return [ return [
snap snap
for snap in Backups.get_snapshots(service) for snap in Backups.get_snapshots(service)
@ -375,7 +382,7 @@ class Backups:
@staticmethod @staticmethod
def prune_all_autosnaps() -> None: def prune_all_autosnaps() -> None:
for service in get_all_services(): for service in ServiceManager.get_all_services():
Backups._prune_auto_snaps(service) Backups._prune_auto_snaps(service)
# Restoring # Restoring
@ -430,7 +437,7 @@ class Backups:
snapshot: Snapshot, strategy=RestoreStrategy.DOWNLOAD_VERIFY_OVERWRITE snapshot: Snapshot, strategy=RestoreStrategy.DOWNLOAD_VERIFY_OVERWRITE
) -> None: ) -> None:
"""Restores a snapshot to its original service using the given strategy""" """Restores a snapshot to its original service using the given strategy"""
service = get_service_by_id(snapshot.service_name) service = ServiceManager.get_service_by_id(snapshot.service_name)
if service is None: if service is None:
raise ValueError( raise ValueError(
f"snapshot has a nonexistent service: {snapshot.service_name}" f"snapshot has a nonexistent service: {snapshot.service_name}"
@ -443,7 +450,8 @@ class Backups:
job, status=JobStatus.RUNNING, status_text="Stopping the service" job, status=JobStatus.RUNNING, status_text="Stopping the service"
) )
with StoppedService(service): with StoppedService(service):
Backups.assert_dead(service) if not service.is_always_active():
Backups.assert_dead(service)
if strategy == RestoreStrategy.INPLACE: if strategy == RestoreStrategy.INPLACE:
Backups._inplace_restore(service, snapshot, job) Backups._inplace_restore(service, snapshot, job)
else: # verify_before_download is our default else: # verify_before_download is our default
@ -474,7 +482,7 @@ class Backups:
def _assert_restorable( def _assert_restorable(
snapshot: Snapshot, strategy=RestoreStrategy.DOWNLOAD_VERIFY_OVERWRITE snapshot: Snapshot, strategy=RestoreStrategy.DOWNLOAD_VERIFY_OVERWRITE
) -> None: ) -> None:
service = get_service_by_id(snapshot.service_name) service = ServiceManager.get_service_by_id(snapshot.service_name)
if service is None: if service is None:
raise ValueError( raise ValueError(
f"snapshot has a nonexistent service: {snapshot.service_name}" f"snapshot has a nonexistent service: {snapshot.service_name}"
@ -645,7 +653,7 @@ class Backups:
"""Returns a list of services that should be backed up at a given time""" """Returns a list of services that should be backed up at a given time"""
return [ return [
service service
for service in get_all_services() for service in ServiceManager.get_all_services()
if Backups.is_time_to_backup_service(service, time) if Backups.is_time_to_backup_service(service, time)
] ]
@ -712,8 +720,18 @@ class Backups:
raise ValueError("unallocated service", service.get_id()) raise ValueError("unallocated service", service.get_id())
# We assume all folders of one service live at the same volume # We assume all folders of one service live at the same volume
fs_info = statvfs(folders[0]) example_folder = folders[0]
usable_bytes = fs_info.f_frsize * fs_info.f_bavail if exists(example_folder):
fs_info = statvfs(example_folder)
usable_bytes = fs_info.f_frsize * fs_info.f_bavail
else:
# Look at the block device as it is written in settings
label = service.get_drive()
device = BlockDevices().get_block_device(label)
if device is None:
raise ValueError("nonexistent drive ", label, " for ", service.get_id())
usable_bytes = int(device.fsavail)
return usable_bytes return usable_bytes
@staticmethod @staticmethod
@ -739,3 +757,52 @@ class Backups:
ServiceStatus.FAILED, ServiceStatus.FAILED,
]: ]:
raise NotDeadError(service) raise NotDeadError(service)
+    @staticmethod
+    def is_same_slice(snap1: Snapshot, snap2: Snapshot) -> bool:
+        # Determines if the snaps were made roughly in the same time period
+        period_minutes = Backups.autobackup_period_minutes()
+        # Autobackups are not guaranteed to be enabled during restore.
+        # If they are not, the period will be None.
+        # We ASSUME that picking the latest snap of the same day is safe enough,
+        # but it is potentially problematic and is better done with metadata, I think.
+        if period_minutes is None:
+            period_minutes = 24 * 60
+
+        if snap1.created_at > snap2.created_at + timedelta(minutes=period_minutes):
+            return False
+        if snap1.created_at < snap2.created_at - timedelta(minutes=period_minutes):
+            return False
+        return True
+
+    @staticmethod
+    def last_backup_slice() -> List[Snapshot]:
+        """
+        Guarantees that the slice is valid, i.e. it has an api snapshot too,
+        or it is empty.
+        """
+        slice: List[Snapshot] = []
+
+        # We need snapshots that were made around the same time,
+        # and we need to be sure that the api snap is in there.
+        # That's why we form the slice around the api snap.
+        api_snaps = Backups.get_snapshots(ServiceManager())
+        if api_snaps == []:
+            return []
+
+        api_snaps.sort(key=lambda x: x.created_at, reverse=True)
+        api_snap = api_snaps[0]  # pick the latest one
+
+        for service in ServiceManager.get_all_services():
+            if isinstance(service, ServiceManager):
+                continue
+            snaps = Backups.get_snapshots(service)
+            snaps.sort(key=lambda x: x.created_at, reverse=True)
+            for snap in snaps:
+                if Backups.is_same_slice(snap, api_snap):
+                    slice.append(snap)
+                    break
+        slice.append(api_snap)
+
+        return slice
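To make the slice rule above concrete, here is a minimal, self-contained sketch of the grouping criterion; `FakeSnapshot` and the sample timestamps are hypothetical stand-ins for the real `Snapshot` model.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class FakeSnapshot:
    service_name: str
    created_at: datetime


def in_same_slice(a: FakeSnapshot, b: FakeSnapshot, period_minutes: int) -> bool:
    # Two snapshots belong to the same slice if they were taken within
    # one autobackup period of each other, in either direction.
    return abs(a.created_at - b.created_at) <= timedelta(minutes=period_minutes)


api = FakeSnapshot("api", datetime(2024, 9, 13, 12, 0))
mail = FakeSnapshot("mail", datetime(2024, 9, 13, 12, 3))
stale = FakeSnapshot("mail", datetime(2024, 9, 12, 12, 0))

assert in_same_slice(api, mail, period_minutes=60)
assert not in_same_slice(api, stale, period_minutes=60)
```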

View file

@@ -11,18 +11,20 @@ from json.decoder import JSONDecodeError
 from os.path import exists, join
 from os import mkdir
 from shutil import rmtree

+from selfprivacy_api.utils.waitloop import wait_until_success
 from selfprivacy_api.graphql.common_types.backup import BackupReason
 from selfprivacy_api.backup.util import output_yielder, sync
 from selfprivacy_api.backup.backuppers import AbstractBackupper
 from selfprivacy_api.models.backup.snapshot import Snapshot
 from selfprivacy_api.backup.jobs import get_backup_job
-from selfprivacy_api.services import get_service_by_id
+from selfprivacy_api.services import ServiceManager
 from selfprivacy_api.jobs import Jobs, JobStatus, Job
 from selfprivacy_api.backup.local_secret import LocalBackupSecret

 SHORT_ID_LEN = 8
+FILESYSTEM_TIMEOUT_SEC = 60

 T = TypeVar("T", bound=Callable)
@@ -172,9 +174,24 @@ class ResticBackupper(AbstractBackupper):
         return messages

+    @staticmethod
+    def _replace_in_array(array: List[str], target, replacement) -> None:
+        if target == "":
+            return
+
+        for i, value in enumerate(array):
+            if target in value:
+                array[i] = array[i].replace(target, replacement)
+
+    def _censor_command(self, command: List[str]) -> List[str]:
+        result = command.copy()
+        ResticBackupper._replace_in_array(result, self.key, "CENSORED")
+        ResticBackupper._replace_in_array(result, LocalBackupSecret.get(), "CENSORED")
+        return result
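A standalone sketch of the censoring helpers above, with a made-up command and secret (the real code censors the repository key and the local backup secret before they can leak into error messages):

```python
from typing import List


def replace_in_array(array: List[str], target: str, replacement: str) -> None:
    # Mirrors ResticBackupper._replace_in_array: in-place substring replacement.
    if target == "":
        return
    for i, value in enumerate(array):
        if target in value:
            array[i] = array[i].replace(target, replacement)


command = ["restic", "backup", "--password", "hunter2", "/var/lib/mail"]
replace_in_array(command, "hunter2", "CENSORED")
assert command == ["restic", "backup", "--password", "CENSORED", "/var/lib/mail"]
```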
     @staticmethod
     def _get_backup_job(service_name: str) -> Optional[Job]:
-        service = get_service_by_id(service_name)
+        service = ServiceManager.get_service_by_id(service_name)
         if service is None:
             raise ValueError("No service with id ", service_name)
@@ -218,7 +235,7 @@ class ResticBackupper(AbstractBackupper):
                 "Could not create a snapshot: ",
                 str(error),
                 "command: ",
-                backup_command,
+                self._censor_command(backup_command),
             ) from error

     @staticmethod
@@ -376,7 +393,9 @@ class ResticBackupper(AbstractBackupper):
         else:  # attempting inplace restore
             for folder in folders:
-                rmtree(folder)
+                wait_until_success(
+                    lambda: rmtree(folder), timeout_sec=FILESYSTEM_TIMEOUT_SEC
+                )
                 mkdir(folder)
             self._raw_verified_restore(snapshot_id, target="/")
             return
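`wait_until_success` comes from `selfprivacy_api.utils.waitloop` and its body is not shown in this diff; a plausible minimal version, assuming retry-until-timeout semantics, could look like this:

```python
import time
from typing import Callable


def wait_until_success(
    action: Callable[[], None], timeout_sec: int = 60, interval: float = 0.5
) -> None:
    # Retry the callable until it stops raising or the timeout expires;
    # the last exception is re-raised once the deadline passes.
    deadline = time.monotonic() + timeout_sec
    while True:
        try:
            action()
            return
        except Exception:
            if time.monotonic() > deadline:
                raise
            time.sleep(interval)
```

The retry matters for in-place restore because the just-stopped service may still hold file handles, so the first `rmtree` attempts can fail transiently.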

View file

@@ -3,7 +3,7 @@ from typing import Optional, List
 from selfprivacy_api.models.backup.snapshot import Snapshot
 from selfprivacy_api.jobs import Jobs, Job, JobStatus
 from selfprivacy_api.services.service import Service
-from selfprivacy_api.services import get_service_by_id
+from selfprivacy_api.services import ServiceManager


 def job_type_prefix(service: Service) -> str:
@@ -14,6 +14,10 @@ def backup_job_type(service: Service) -> str:
     return f"{job_type_prefix(service)}.backup"


+def autobackup_job_type() -> str:
+    return "backups.autobackup"
+
+
 def restore_job_type(service: Service) -> str:
     return f"{job_type_prefix(service)}.restore"

@@ -36,6 +40,17 @@ def is_something_running_for(service: Service) -> bool:
     return len(running_jobs) != 0


+def add_autobackup_job(services: List[Service]) -> Job:
+    service_names = [s.get_display_name() for s in services]
+    pretty_service_list: str = ", ".join(service_names)
+    job = Jobs.add(
+        type_id=autobackup_job_type(),
+        name="Automatic backup",
+        description=f"Scheduled backup for services: {pretty_service_list}",
+    )
+    return job
+
+
 def add_backup_job(service: Service) -> Job:
     if is_something_running_for(service):
         message = (
@@ -52,21 +67,55 @@ def add_backup_job(service: Service) -> Job:
     return job


+def complain_about_service_operation_running(service: Service) -> str:
+    message = f"Cannot start a restore of {service.get_id()}, another operation is running: {get_jobs_by_service(service)[0].type_id}"
+    raise ValueError(message)
+
+
+def add_total_restore_job() -> Job:
+    for service in ServiceManager.get_enabled_services():
+        ensure_nothing_runs_for(service)
+
+    job = Jobs.add(
+        type_id="backups.total_restore",
+        name=f"Total restore",
+        description="Restoring all enabled services",
+    )
+    return job
+
+
+def ensure_nothing_runs_for(service: Service):
+    if (
+        # TODO: try removing the exception. Why would we have it?
+        not isinstance(service, ServiceManager)
+        and is_something_running_for(service) is True
+    ):
+        complain_about_service_operation_running(service)
+
+
+def add_total_backup_job() -> Job:
+    for service in ServiceManager.get_enabled_services():
+        ensure_nothing_runs_for(service)
+
+    job = Jobs.add(
+        type_id="backups.total_backup",
+        name=f"Total backup",
+        description="Backing up all the enabled services",
+    )
+    return job
+
+
 def add_restore_job(snapshot: Snapshot) -> Job:
-    service = get_service_by_id(snapshot.service_name)
+    service = ServiceManager.get_service_by_id(snapshot.service_name)
     if service is None:
         raise ValueError(f"no such service: {snapshot.service_name}")
     if is_something_running_for(service):
-        message = (
-            f"Cannot start a restore of {service.get_id()}, another operation is running: "
-            + get_jobs_by_service(service)[0].type_id
-        )
-        raise ValueError(message)
+        complain_about_service_operation_running(service)
     display_name = service.get_display_name()
     job = Jobs.add(
         type_id=restore_job_type(service),
         name=f"Restore {display_name}",
-        description=f"restoring {display_name} from {snapshot.id}",
+        description=f"Restoring {display_name} from {snapshot.id}",
     )
     return job

@@ -78,12 +127,14 @@ def get_job_by_type(type_id: str) -> Optional[Job]:
                 JobStatus.RUNNING,
             ]:
                 return job
+    return None


 def get_failed_job_by_type(type_id: str) -> Optional[Job]:
     for job in Jobs.get_jobs():
         if job.type_id == type_id and job.status == JobStatus.ERROR:
             return job
+    return None


 def get_backup_job(service: Service) -> Optional[Job]:

View file

@@ -21,6 +21,8 @@ PROVIDER_MAPPING: dict[BackupProviderEnum, Type[AbstractBackupProvider]] = {
 def get_provider(
     provider_type: BackupProviderEnum,
 ) -> Type[AbstractBackupProvider]:
+    if provider_type not in PROVIDER_MAPPING.keys():
+        raise LookupError("could not look up provider", provider_type)
     return PROVIDER_MAPPING[provider_type]
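The guard turns a silent `KeyError` into a descriptive `LookupError`. A standalone sketch of the same pattern, with a hypothetical two-member enum:

```python
from enum import Enum


class Provider(Enum):
    BACKBLAZE = "BACKBLAZE"
    MEMORY = "MEMORY"


PROVIDER_MAPPING = {Provider.BACKBLAZE: object}  # MEMORY deliberately missing


def get_provider(provider_type: Provider) -> type:
    if provider_type not in PROVIDER_MAPPING:
        raise LookupError("could not look up provider", provider_type)
    return PROVIDER_MAPPING[provider_type]


try:
    get_provider(Provider.MEMORY)
except LookupError as error:
    print(error)  # ('could not look up provider', <Provider.MEMORY: 'MEMORY'>)
```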

View file

@@ -3,7 +3,8 @@ An abstract class for BackBlaze, S3 etc.
 It assumes that while some providers are supported via restic/rclone, others
 may require different backends
 """
-from abc import ABC, abstractmethod
+
+from abc import ABC
 from selfprivacy_api.backup.backuppers import AbstractBackupper
 from selfprivacy_api.graphql.queries.providers import (
     BackupProvider as BackupProviderEnum,

View file

@@ -1,6 +1,7 @@
 """
 Module for storing backup related data in redis.
 """
+
 from typing import List, Optional
 from datetime import datetime

View file

@@ -1,7 +1,9 @@
 """
 The tasks module contains the worker tasks that are used to back up and restore
 """
+
 from datetime import datetime, timezone
+from typing import List

 from selfprivacy_api.graphql.common_types.backup import (
     RestoreStrategy,
@@ -12,10 +14,12 @@ from selfprivacy_api.models.backup.snapshot import Snapshot
 from selfprivacy_api.utils.huey import huey
 from huey import crontab
-from selfprivacy_api.services.service import Service
-from selfprivacy_api.services import get_service_by_id
+from selfprivacy_api.services import ServiceManager, Service
 from selfprivacy_api.backup import Backups
+from selfprivacy_api.backup.jobs import add_autobackup_job
 from selfprivacy_api.jobs import Jobs, JobStatus, Job
+from selfprivacy_api.jobs.upgrade_system import rebuild_system
+from selfprivacy_api.actions.system import add_rebuild_job

 SNAPSHOT_CACHE_TTL_HOURS = 6
@@ -31,13 +35,21 @@ def validate_datetime(dt: datetime) -> bool:
     return Backups.is_time_to_backup(dt)


+def report_job_error(error: Exception, job: Job):
+    Jobs.update(
+        job,
+        status=JobStatus.ERROR,
+        error=type(error).__name__ + ": " + str(error),
+    )
+
+
 # huey tasks need to return something
 @huey.task()
 def start_backup(service_id: str, reason: BackupReason = BackupReason.EXPLICIT) -> bool:
     """
     The worker task that starts the backup process.
     """
-    service = get_service_by_id(service_id)
+    service = ServiceManager.get_service_by_id(service_id)
     if service is None:
         raise ValueError(f"No such service: {service_id}")
     Backups.back_up(service, reason)
@@ -72,28 +84,153 @@ def restore_snapshot(
     return True


-def do_autobackup():
-    """
-    Body of autobackup task, broken out to test it
-    For some reason, we cannot launch periodic huey tasks
-    inside tests
-    """
-    time = datetime.utcnow().replace(tzinfo=timezone.utc)
-    for service in Backups.services_to_back_up(time):
-        handle = start_backup(service.get_id(), BackupReason.AUTO)
-        # To be on safe side, we do not do it in parallel
-        handle(blocking=True)
+@huey.task()
+def full_restore(job: Job) -> bool:
+    do_full_restore(job)
+    return True


 @huey.periodic_task(validate_datetime=validate_datetime)
-def automatic_backup() -> bool:
+def automatic_backup() -> None:
     """
     The worker periodic task that starts the automatic backup process.
     """
     do_autobackup()
-    return True
+
+
+@huey.task()
+def total_backup(job: Job) -> bool:
+    do_total_backup(job)
+    return True


 @huey.periodic_task(crontab(hour="*/" + str(SNAPSHOT_CACHE_TTL_HOURS)))
 def reload_snapshot_cache():
     Backups.force_snapshot_cache_reload()
+
+
+def back_up_multiple(
+    job: Job,
+    services_to_back_up: List[Service],
+    reason: BackupReason = BackupReason.EXPLICIT,
+):
+    if services_to_back_up == []:
+        return
+
+    progress_per_service = 100 // len(services_to_back_up)
+    progress = 0
+    Jobs.update(job, JobStatus.RUNNING, progress=progress)
+
+    for service in services_to_back_up:
+        try:
+            Backups.back_up(service, reason)
+        except Exception as error:
+            report_job_error(error, job)
+            raise error
+        progress = progress + progress_per_service
+        Jobs.update(job, JobStatus.RUNNING, progress=progress)
+
+
+def do_total_backup(job: Job) -> None:
+    """
+    Body of the total backup task, broken out to test it
+    """
+    back_up_multiple(job, ServiceManager.get_enabled_services())
+    Jobs.update(job, JobStatus.FINISHED)
+
+
+def do_autobackup() -> None:
+    """
+    Body of the autobackup task, broken out to test it.
+    For some reason, we cannot launch periodic huey tasks
+    inside tests
+    """
+    time = datetime.now(timezone.utc)
+
+    backups_were_disabled = Backups.autobackup_period_minutes() is None
+
+    if backups_were_disabled:
+        # Temporarily enable autobackup
+        Backups.set_autobackup_period_minutes(24 * 60)  # 1 day
+
+    services_to_back_up = Backups.services_to_back_up(time)
+    if not services_to_back_up:
+        return
+    job = add_autobackup_job(services_to_back_up)
+
+    back_up_multiple(job, services_to_back_up, BackupReason.AUTO)
+
+    if backups_were_disabled:
+        Backups.set_autobackup_period_minutes(0)
+
+    Jobs.update(job, JobStatus.FINISHED)
+    # There is no point in returning the job:
+    # this code is called with a delay.
+
+
+def eligible_for_full_restoration(snap: Snapshot):
+    service = ServiceManager.get_service_by_id(snap.service_name)
+    if service is None:
+        return False
+    if service.is_enabled() is False:
+        return False
+    return True
+
+
+def which_snapshots_to_full_restore() -> list[Snapshot]:
+    autoslice = Backups.last_backup_slice()
+    api_snapshot = None
+
+    for snap in autoslice:
+        if snap.service_name == "api":
+            api_snapshot = snap
+            autoslice.remove(snap)
+    if api_snapshot is None:
+        raise ValueError(
+            "Cannot restore, no configuration snapshot found. This particular error should be unreachable"
+        )
+
+    snapshots_to_restore = [
+        snap for snap in autoslice if eligible_for_full_restoration(snap)
+    ]
+    # The API snapshot should be restored at the very end of the list
+    # because it requires a rebuild right afterwards.
+    snapshots_to_restore.append(api_snapshot)
+    return snapshots_to_restore
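The ordering invariant here is that the api snapshot is restored last, because restoring it triggers a system rebuild. A tiny sketch with hypothetical service names:

```python
def order_for_restore(service_names: list[str]) -> list[str]:
    # Everything else first, the configuration ("api") snapshot last.
    others = [name for name in service_names if name != "api"]
    return others + ["api"]


assert order_for_restore(["mail", "api", "gitea"]) == ["mail", "gitea", "api"]
```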
+def do_full_restore(job: Job) -> None:
+    """
+    Body of the full restore task, a part of server migration.
+    Broken out to test it independently from the task infra.
+    """
+    Jobs.update(
+        job,
+        JobStatus.RUNNING,
+        status_text="Finding the last autobackup session",
+        progress=0,
+    )
+    snapshots_to_restore = which_snapshots_to_full_restore()
+
+    progress_per_service = 99 // len(snapshots_to_restore)
+    progress = 0
+    Jobs.update(job, JobStatus.RUNNING, progress=progress)
+
+    for snap in snapshots_to_restore:
+        try:
+            Backups.restore_snapshot(snap)
+        except Exception as error:
+            report_job_error(error, job)
+        progress = progress + progress_per_service
+        Jobs.update(
+            job,
+            JobStatus.RUNNING,
+            status_text=f"restoring {snap.service_name}",
+            progress=progress,
+        )
+
+    Jobs.update(job, JobStatus.RUNNING, status_text="rebuilding system", progress=99)
+
+    # Adding a separate job to not confuse the user with a jumping progress bar
+    rebuild_job = add_rebuild_job()
+    rebuild_system(rebuild_job)
+    Jobs.update(job, JobStatus.FINISHED)

View file

@@ -27,4 +27,4 @@ async def get_token_header(

 def get_api_version() -> str:
     """Get API version"""
-    return "3.0.1"
+    return "3.3.0"

View file

@@ -1,4 +1,5 @@
 """GraphQL API for SelfPrivacy."""
+
 # pylint: disable=too-few-public-methods
 import typing
 from strawberry.permission import BasePermission
@@ -16,6 +17,10 @@ class IsAuthenticated(BasePermission):
         token = info.context["request"].headers.get("Authorization")
         if token is None:
             token = info.context["request"].query_params.get("token")
+        if token is None:
+            connection_params = info.context.get("connection_params")
+            if connection_params is not None:
+                token = connection_params.get("Authorization")
         if token is None:
             return False
         return is_token_valid(token.replace("Bearer ", ""))
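The token resolution order implemented above is: `Authorization` header, then the `token` query parameter, then (for websocket subscriptions) the connection params. A standalone sketch of that fallback chain:

```python
from typing import Optional


def resolve_token(
    headers: dict,
    query_params: dict,
    connection_params: Optional[dict],
) -> Optional[str]:
    token = headers.get("Authorization")
    if token is None:
        token = query_params.get("token")
    if token is None and connection_params is not None:
        token = connection_params.get("Authorization")
    return token


# A websocket client that cannot set HTTP headers can still authenticate:
assert resolve_token({}, {}, {"Authorization": "Bearer abc"}) == "Bearer abc"
```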

View file

@@ -1,4 +1,5 @@
 """Backup"""
+
 # pylint: disable=too-few-public-methods
 from enum import Enum
 import strawberry

View file

@@ -2,6 +2,7 @@ import typing
 import strawberry

+# TODO: use https://strawberry.rocks/docs/integrations/pydantic when it is stable
 @strawberry.type
 class DnsRecord:
     """DNS record"""

View file

@@ -1,4 +1,5 @@
 """Jobs status"""
+
 # pylint: disable=too-few-public-methods
 import datetime
 import typing

View file

@@ -1,13 +1,17 @@
 from enum import Enum
-import typing
-import strawberry
+from typing import Optional, List
 import datetime
+import strawberry

 from selfprivacy_api.graphql.common_types.backup import BackupReason
 from selfprivacy_api.graphql.common_types.dns import DnsRecord

-from selfprivacy_api.services import get_service_by_id, get_services_by_location
+from selfprivacy_api.services import ServiceManager
 from selfprivacy_api.services import Service as ServiceInterface
+from selfprivacy_api.services import ServiceDnsRecord
 from selfprivacy_api.utils.block_devices import BlockDevices
+from selfprivacy_api.utils.network import get_ip4, get_ip6


 def get_usages(root: "StorageVolume") -> list["StorageUsageInterface"]:
@@ -19,7 +23,7 @@ def get_usages(root: "StorageVolume") -> list["StorageUsageInterface"]:
             used_space=str(service.get_storage_usage()),
             volume=get_volume_by_id(service.get_drive()),
         )
-        for service in get_services_by_location(root.name)
+        for service in ServiceManager.get_services_by_location(root.name)
     ]
@@ -32,8 +36,8 @@ class StorageVolume:
     used_space: str
     root: bool
     name: str
-    model: typing.Optional[str]
-    serial: typing.Optional[str]
+    model: Optional[str]
+    serial: Optional[str]
     type: str

     @strawberry.field
@@ -45,7 +49,7 @@ class StorageVolume:
 @strawberry.interface
 class StorageUsageInterface:
     used_space: str
-    volume: typing.Optional[StorageVolume]
+    volume: Optional[StorageVolume]
     title: str

@@ -53,7 +57,7 @@
 class ServiceStorageUsage(StorageUsageInterface):
     """Storage usage for a service"""

-    service: typing.Optional["Service"]
+    service: Optional["Service"]


 @strawberry.enum
@@ -69,7 +73,7 @@ class ServiceStatusEnum(Enum):
 def get_storage_usage(root: "Service") -> ServiceStorageUsage:
     """Get storage usage for a service"""
-    service = get_service_by_id(root.id)
+    service = ServiceManager.get_service_by_id(root.id)
     if service is None:
         return ServiceStorageUsage(
             service=service,
@@ -85,6 +89,83 @@ def get_storage_usage(root: "Service") -> ServiceStorageUsage:
     )


+# TODO: This won't be needed when deriving DnsRecord via strawberry pydantic integration
+# https://strawberry.rocks/docs/integrations/pydantic
+# Remove when the link above says it got stable.
+def service_dns_to_graphql(record: ServiceDnsRecord) -> DnsRecord:
+    return DnsRecord(
+        record_type=record.type,
+        name=record.name,
+        content=record.content,
+        ttl=record.ttl,
+        priority=record.priority,
+        display_name=record.display_name,
+    )
+
+
+@strawberry.interface
+class ConfigItem:
+    field_id: str
+    description: str
+    widget: str
+    type: str
+
+
+@strawberry.type
+class StringConfigItem(ConfigItem):
+    value: str
+    default_value: str
+    regex: Optional[str]
+
+
+@strawberry.type
+class BoolConfigItem(ConfigItem):
+    value: bool
+    default_value: bool
+
+
+@strawberry.type
+class EnumConfigItem(ConfigItem):
+    value: str
+    default_value: str
+    options: list[str]
+
+
+def config_item_to_graphql(item: dict) -> ConfigItem:
+    item_type = item.get("type")
+    if item_type == "string":
+        return StringConfigItem(
+            field_id=item["id"],
+            description=item["description"],
+            widget=item["widget"],
+            type=item_type,
+            value=item["value"],
+            default_value=item["default_value"],
+            regex=item.get("regex"),
+        )
+    elif item_type == "bool":
+        return BoolConfigItem(
+            field_id=item["id"],
+            description=item["description"],
+            widget=item["widget"],
+            type=item_type,
+            value=item["value"],
+            default_value=item["default_value"],
+        )
+    elif item_type == "enum":
+        return EnumConfigItem(
+            field_id=item["id"],
+            description=item["description"],
+            widget=item["widget"],
+            type=item_type,
+            value=item["value"],
+            default_value=item["default_value"],
+            options=item["options"],
+        )
+    else:
+        raise ValueError(f"Unknown config item type {item_type}")
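For reference, a hypothetical input dict in the shape `config_item_to_graphql` expects (field names taken from the converter above):

```python
sample_item = {
    "type": "enum",
    "id": "theme",  # becomes field_id
    "description": "Color theme",
    "widget": "select",
    "value": "dark",
    "default_value": "light",
    "options": ["light", "dark"],
}
# config_item_to_graphql(sample_item) would return an EnumConfigItem;
# a dict with "type": "string" may also carry an optional "regex" key.
```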
 @strawberry.type
 class Service:
     id: str
@@ -94,11 +175,21 @@ class Service:
     is_movable: bool
     is_required: bool
     is_enabled: bool
+    is_installed: bool
     can_be_backed_up: bool
     backup_description: str
     status: ServiceStatusEnum
-    url: typing.Optional[str]
-    dns_records: typing.Optional[typing.List[DnsRecord]]
+    url: Optional[str]
+
+    @strawberry.field
+    def dns_records(self) -> Optional[List[DnsRecord]]:
+        service = ServiceManager.get_service_by_id(self.id)
+        if service is None:
+            raise LookupError(f"no service {self.id}. Should be unreachable")
+
+        raw_records = service.get_dns_records(get_ip4(), get_ip6())
+        dns_records = [service_dns_to_graphql(record) for record in raw_records]
+        return dns_records

     @strawberry.field
     def storage_usage(self) -> ServiceStorageUsage:
@@ -106,7 +197,21 @@ class Service:
         return get_storage_usage(self)

     @strawberry.field
-    def backup_snapshots(self) -> typing.Optional[typing.List["SnapshotInfo"]]:
+    def configuration(self) -> Optional[List[ConfigItem]]:
+        """Get service configuration"""
+        service = ServiceManager.get_service_by_id(self.id)
+        if service is None:
+            return None
+        config_items = service.get_configuration()
+        # If it is an empty dict, return None
+        if not config_items:
+            return None
+        # Convert every dict into a ConfigItem by its "type" field.
+        # In the future there will be more types.
+        return [config_item_to_graphql(config_items[item]) for item in config_items]
+
+    # TODO: fill this
+    @strawberry.field
+    def backup_snapshots(self) -> Optional[List["SnapshotInfo"]]:
         return None
@@ -128,33 +233,23 @@ def service_to_graphql_service(service: ServiceInterface) -> Service:
         is_movable=service.is_movable(),
         is_required=service.is_required(),
         is_enabled=service.is_enabled(),
+        is_installed=service.is_installed(),
         can_be_backed_up=service.can_be_backed_up(),
         backup_description=service.get_backup_description(),
         status=ServiceStatusEnum(service.get_status().value),
         url=service.get_url(),
-        dns_records=[
-            DnsRecord(
-                record_type=record.type,
-                name=record.name,
-                content=record.content,
-                ttl=record.ttl,
-                priority=record.priority,
-                display_name=record.display_name,
-            )
-            for record in service.get_dns_records()
-        ],
     )


-def get_volume_by_id(volume_id: str) -> typing.Optional[StorageVolume]:
+def get_volume_by_id(volume_id: str) -> Optional[StorageVolume]:
     """Get volume by id"""
     volume = BlockDevices().get_block_device(volume_id)
     if volume is None:
         return None
     return StorageVolume(
-        total_space=str(volume.fssize)
-        if volume.fssize is not None
-        else str(volume.size),
+        total_space=(
+            str(volume.fssize) if volume.fssize is not None else str(volume.size)
+        ),
         free_space=str(volume.fsavail),
         used_space=str(volume.fsused),
         root=volume.name == "sda1",

View file

@ -1,4 +1,5 @@
"""API access mutations""" """API access mutations"""
# pylint: disable=too-few-public-methods # pylint: disable=too-few-public-methods
import datetime import datetime
import typing import typing

View file

@@ -1,6 +1,8 @@
 import typing
 import strawberry

+from selfprivacy_api.utils.graphql import api_job_mutation_error
+
 from selfprivacy_api.jobs import Jobs

 from selfprivacy_api.graphql import IsAuthenticated
@@ -19,13 +21,21 @@ from selfprivacy_api.graphql.common_types.backup import (
 )

 from selfprivacy_api.backup import Backups
-from selfprivacy_api.services import get_service_by_id
+from selfprivacy_api.services import ServiceManager
 from selfprivacy_api.backup.tasks import (
     start_backup,
     restore_snapshot,
     prune_autobackup_snapshots,
+    full_restore,
+    total_backup,
 )
-from selfprivacy_api.backup.jobs import add_backup_job, add_restore_job
+from selfprivacy_api.backup.jobs import (
+    add_backup_job,
+    add_restore_job,
+    add_total_restore_job,
+    add_total_backup_job,
+)
+from selfprivacy_api.backup.local_secret import LocalBackupSecret


 @strawberry.input
@@ -40,6 +50,8 @@ class InitializeRepositoryInput:
     # Key ID and key for Backblaze
     login: str
     password: str
+    # For migration. If set, no new secret is generated.
+    local_secret: typing.Optional[str] = None


 @strawberry.type
@@ -63,7 +75,13 @@ class BackupMutations:
             location=repository.location_name,
             repo_id=repository.location_id,
         )
-        Backups.init_repo()
+
+        secret = repository.local_secret
+        if secret is not None:
+            LocalBackupSecret.set(secret)
+            Backups.force_snapshot_cache_reload()
+        else:
+            Backups.init_repo()
         return GenericBackupConfigReturn(
             success=True,
             message="",
@@ -138,7 +156,7 @@ class BackupMutations:
     def start_backup(self, service_id: str) -> GenericJobMutationReturn:
         """Start backup"""
-        service = get_service_by_id(service_id)
+        service = ServiceManager.get_service_by_id(service_id)
         if service is None:
             return GenericJobMutationReturn(
                 success=False,
@@ -157,6 +175,50 @@ class BackupMutations:
             job=job_to_api_job(job),
         )

+    @strawberry.mutation(permission_classes=[IsAuthenticated])
+    def total_backup(self) -> GenericJobMutationReturn:
+        """Back up all the enabled services at once.
+        Useful when migrating.
+        """
+        try:
+            job = add_total_backup_job()
+            total_backup(job)
+        except Exception as error:
+            return api_job_mutation_error(error)
+
+        return GenericJobMutationReturn(
+            success=True,
+            code=200,
+            message="Total backup task queued",
+            job=job_to_api_job(job),
+        )
+
+    @strawberry.mutation(permission_classes=[IsAuthenticated])
+    def restore_all(self) -> GenericJobMutationReturn:
+        """
+        Restore all restorable and enabled services according to the last
+        autobackup snapshots.
+        This happens in sync with partial merging of the old configuration
+        for compatibility.
+        """
+        try:
+            job = add_total_restore_job()
+            full_restore(job)
+        except Exception as error:
+            return GenericJobMutationReturn(
+                success=False,
+                code=400,
+                message=str(error),
+                job=None,
+            )
+
+        return GenericJobMutationReturn(
+            success=True,
+            code=200,
+            message="restore job created",
+            job=job_to_api_job(job),
+        )
+
     @strawberry.mutation(permission_classes=[IsAuthenticated])
     def restore_backup(
         self,
@@ -173,7 +235,7 @@ class BackupMutations:
                 job=None,
             )

-        service = get_service_by_id(snap.service_name)
+        service = ServiceManager.get_service_by_id(snap.service_name)
         if service is None:
             return GenericJobMutationReturn(
                 success=False,

View file

@@ -1,4 +1,5 @@
 """Manipulate jobs"""
+
 # pylint: disable=too-few-public-methods
 import strawberry

View file

@@ -1,22 +1,32 @@
 """Services mutations"""
+
 # pylint: disable=too-few-public-methods
 import typing
 import strawberry

+from selfprivacy_api.utils import pretty_error
 from selfprivacy_api.graphql import IsAuthenticated
 from selfprivacy_api.graphql.common_types.jobs import job_to_api_job
 from selfprivacy_api.jobs import JobStatus

-from selfprivacy_api.graphql.common_types.service import (
-    Service,
-    service_to_graphql_service,
-)
 from selfprivacy_api.graphql.mutations.mutation_interface import (
     GenericJobMutationReturn,
     GenericMutationReturn,
 )
+from selfprivacy_api.graphql.common_types.service import (
+    Service,
+    service_to_graphql_service,
+)

-from selfprivacy_api.services import get_service_by_id
-from selfprivacy_api.utils.block_devices import BlockDevices
+from selfprivacy_api.actions.services import (
+    move_service,
+    ServiceNotFoundError,
+    VolumeNotFoundError,
+)
+from selfprivacy_api.services import ServiceManager


 @strawberry.type
@@ -26,6 +36,51 @@ class ServiceMutationReturn(GenericMutationReturn):
     service: typing.Optional[Service] = None


+@strawberry.input
+class SetServiceConfigurationInput:
+    """Set service configuration input type.
+    The values might be of different types: str or bool.
+    """
+
+    service_id: str
+    configuration: strawberry.scalars.JSON
+    """Yes, it is a JSON scalar, which is supposed to be a Map<str, Union[str, int, bool]>.
+    I can't define it as a proper type because GraphQL doesn't support unions in input types.
+    There is a @oneOf directive, but it doesn't fit this use case.
+
+    Another option would have been something like this:
+    ```python
+    @strawberry.type
+    class StringConfigurationInputField:
+        fieldId: str
+        value: str
+
+    @strawberry.type
+    class BoolConfigurationInputField:
+        fieldId: str
+        value: bool
+
+    // ...
+
+    @strawberry.input
+    class SetServiceConfigurationInput:
+        service_id: str
+        stringFields: List[StringConfigurationInputField]
+        boolFields: List[BoolConfigurationInputField]
+        enumFields: List[EnumConfigurationInputField]
+        intFields: List[IntConfigurationInputField]
+    ```
+    But it would be very painful to maintain and would break compatibility with
+    every change.
+    Be careful when parsing it. It will probably be wise to add a parser/validator
+    later when we get a new Pydantic integration in Strawberry.
+    -- Inex, 26.07.2024
+    """
 @strawberry.input
 class MoveServiceInput:
     """Move service input type."""
@@ -49,7 +104,7 @@ class ServicesMutations:
     def enable_service(self, service_id: str) -> ServiceMutationReturn:
         """Enable service."""
         try:
-            service = get_service_by_id(service_id)
+            service = ServiceManager.get_service_by_id(service_id)
             if service is None:
                 return ServiceMutationReturn(
                     success=False,
@@ -60,7 +115,7 @@ class ServicesMutations:
         except Exception as e:
             return ServiceMutationReturn(
                 success=False,
-                message=format_error(e),
+                message=pretty_error(e),
                 code=400,
             )
@@ -75,7 +130,7 @@ class ServicesMutations:
     def disable_service(self, service_id: str) -> ServiceMutationReturn:
         """Disable service."""
         try:
-            service = get_service_by_id(service_id)
+            service = ServiceManager.get_service_by_id(service_id)
             if service is None:
                 return ServiceMutationReturn(
                     success=False,
@@ -86,7 +141,7 @@ class ServicesMutations:
         except Exception as e:
             return ServiceMutationReturn(
                 success=False,
-                message=format_error(e),
+                message=pretty_error(e),
                 code=400,
             )
         return ServiceMutationReturn(
@@ -99,7 +154,7 @@ class ServicesMutations:
     @strawberry.mutation(permission_classes=[IsAuthenticated])
     def stop_service(self, service_id: str) -> ServiceMutationReturn:
         """Stop service."""
-        service = get_service_by_id(service_id)
+        service = ServiceManager.get_service_by_id(service_id)
         if service is None:
             return ServiceMutationReturn(
                 success=False,
@@ -117,7 +172,7 @@ class ServicesMutations:
     @strawberry.mutation(permission_classes=[IsAuthenticated])
     def start_service(self, service_id: str) -> ServiceMutationReturn:
         """Start service."""
-        service = get_service_by_id(service_id)
+        service = ServiceManager.get_service_by_id(service_id)
         if service is None:
             return ServiceMutationReturn(
                 success=False,
@@ -135,7 +190,7 @@ class ServicesMutations:
     @strawberry.mutation(permission_classes=[IsAuthenticated])
     def restart_service(self, service_id: str) -> ServiceMutationReturn:
         """Restart service."""
-        service = get_service_by_id(service_id)
+        service = ServiceManager.get_service_by_id(service_id)
         if service is None:
             return ServiceMutationReturn(
                 success=False,
@@ -151,33 +206,69 @@ class ServicesMutations:
         )
     @strawberry.mutation(permission_classes=[IsAuthenticated])
-    def move_service(self, input: MoveServiceInput) -> ServiceJobMutationReturn:
-        """Move service."""
-        service = get_service_by_id(input.service_id)
+    def set_service_configuration(
+        self, input: SetServiceConfigurationInput
+    ) -> ServiceMutationReturn:
+        """Set the new configuration values"""
+        service = ServiceManager.get_service_by_id(input.service_id)
         if service is None:
-            return ServiceJobMutationReturn(
+            return ServiceMutationReturn(
                 success=False,
-                message="Service not found.",
+                message=f"Service does not exist: {input.service_id}",
                 code=404,
             )
-        # TODO: make serviceImmovable and BlockdeviceNotFound exceptions
-        # in the move_to_volume() function and handle them here
-        if not service.is_movable():
-            return ServiceJobMutationReturn(
+        try:
+            service.set_configuration(input.configuration)
+            return ServiceMutationReturn(
+                success=True,
+                message="Service configuration updated.",
+                code=200,
+                service=service_to_graphql_service(service),
+            )
+        except ValueError as e:
+            return ServiceMutationReturn(
                 success=False,
-                message="Service is not movable.",
+                message=e.args[0],
                 code=400,
                 service=service_to_graphql_service(service),
             )
-        volume = BlockDevices().get_block_device(input.location)
-        if volume is None:
-            return ServiceJobMutationReturn(
+        except Exception as e:
+            return ServiceMutationReturn(
                 success=False,
-                message="Volume not found.",
-                code=404,
+                message=pretty_error(e),
+                code=400,
                 service=service_to_graphql_service(service),
             )
-        job = service.move_to_volume(volume)
+
+    @strawberry.mutation(permission_classes=[IsAuthenticated])
+    def move_service(self, input: MoveServiceInput) -> ServiceJobMutationReturn:
+        """Move service."""
+        # We need a service instance for a reply later
+        service = ServiceManager.get_service_by_id(input.service_id)
+        if service is None:
+            return ServiceJobMutationReturn(
+                success=False,
+                message=f"Service does not exist: {input.service_id}",
+                code=404,
+            )
+
+        try:
+            job = move_service(input.service_id, input.location)
+        except (ServiceNotFoundError, VolumeNotFoundError) as e:
+            return ServiceJobMutationReturn(
+                success=False,
+                message=pretty_error(e),
+                code=404,
+            )
+        except Exception as e:
+            return ServiceJobMutationReturn(
+                success=False,
+                message=pretty_error(e),
+                code=400,
+                service=service_to_graphql_service(service),
+            )
+
         if job.status in [JobStatus.CREATED, JobStatus.RUNNING]:
             return ServiceJobMutationReturn(
                 success=True,
@@ -197,12 +288,8 @@ class ServicesMutations:
         else:
             return ServiceJobMutationReturn(
                 success=False,
-                message=f"Service move failure: {job.status_text}",
+                message=f"While moving the service and performing the step '{job.status_text}', an error occurred: {job.error}",
                 code=400,
                 service=service_to_graphql_service(service),
                 job=job_to_api_job(job),
             )
-
-
-def format_error(e: Exception) -> str:
-    return type(e).__name__ + ": " + str(e)

View file

@@ -1,4 +1,5 @@
 """Storage devices mutations"""
+
 import strawberry
 from selfprivacy_api.graphql import IsAuthenticated
 from selfprivacy_api.graphql.common_types.jobs import job_to_api_job

View file

@@ -1,15 +1,25 @@
 """System management mutations"""
+
 # pylint: disable=too-few-public-methods
 import typing
 import strawberry

+from selfprivacy_api.utils import pretty_error
+from selfprivacy_api.jobs.nix_collect_garbage import start_nix_collect_garbage
 from selfprivacy_api.graphql import IsAuthenticated
+from selfprivacy_api.graphql.common_types.jobs import job_to_api_job
+from selfprivacy_api.graphql.queries.providers import DnsProvider
 from selfprivacy_api.graphql.mutations.mutation_interface import (
+    GenericJobMutationReturn,
     GenericMutationReturn,
     MutationReturnInterface,
 )

 import selfprivacy_api.actions.system as system_actions
 import selfprivacy_api.actions.ssh as ssh_actions
+from selfprivacy_api.actions.system import set_dns_provider


 @strawberry.type
@@ -43,6 +53,14 @@ class SSHSettingsInput:
     password_authentication: bool


+@strawberry.input
+class SetDnsProviderInput:
+    """Input type to set the provider"""
+
+    provider: DnsProvider
+    api_token: str
+
+
 @strawberry.input
 class AutoUpgradeSettingsInput:
     """Input type for auto upgrade settings"""
@@ -114,16 +132,17 @@ class SystemMutations:
         )

     @strawberry.mutation(permission_classes=[IsAuthenticated])
-    def run_system_rebuild(self) -> GenericMutationReturn:
+    def run_system_rebuild(self) -> GenericJobMutationReturn:
         try:
-            system_actions.rebuild_system()
-            return GenericMutationReturn(
+            job = system_actions.rebuild_system()
+            return GenericJobMutationReturn(
                 success=True,
-                message="Starting rebuild system",
+                message="Starting system rebuild",
                 code=200,
+                job=job_to_api_job(job),
             )
         except system_actions.ShellException as e:
-            return GenericMutationReturn(
+            return GenericJobMutationReturn(
                 success=False,
                 message=str(e),
                 code=500,
@@ -135,7 +154,7 @@ class SystemMutations:
         try:
             return GenericMutationReturn(
                 success=True,
-                message="Starting rebuild system",
+                message="Starting system rollback",
                 code=200,
             )
         except system_actions.ShellException as e:
@@ -146,16 +165,17 @@ class SystemMutations:
             )

     @strawberry.mutation(permission_classes=[IsAuthenticated])
-    def run_system_upgrade(self) -> GenericMutationReturn:
-        system_actions.upgrade_system()
+    def run_system_upgrade(self) -> GenericJobMutationReturn:
         try:
-            return GenericMutationReturn(
+            job = system_actions.upgrade_system()
+            return GenericJobMutationReturn(
                 success=True,
-                message="Starting rebuild system",
+                message="Starting system upgrade",
                 code=200,
+                job=job_to_api_job(job),
             )
         except system_actions.ShellException as e:
-            return GenericMutationReturn(
+            return GenericJobMutationReturn(
                 success=False,
                 message=str(e),
                 code=500,
@@ -191,3 +211,31 @@ class SystemMutations:
                 message=f"Failed to pull repository changes:\n{result.data}",
                 code=500,
             )
+
+    @strawberry.mutation(permission_classes=[IsAuthenticated])
+    def nix_collect_garbage(self) -> GenericJobMutationReturn:
+        job = start_nix_collect_garbage()
+
+        return GenericJobMutationReturn(
+            success=True,
+            code=200,
+            message="Garbage collector started...",
+            job=job_to_api_job(job),
+        )
+
+    @strawberry.mutation(permission_classes=[IsAuthenticated])
+    def set_dns_provider(self, input: SetDnsProviderInput) -> GenericMutationReturn:
+        try:
+            set_dns_provider(input.provider, input.api_token)
+            return GenericMutationReturn(
+                success=True,
+                code=200,
+                message="Provider set",
+            )
+        except Exception as e:
+            return GenericMutationReturn(
+                success=False,
+                code=400,
+                message=pretty_error(e),
+            )

View file

@@ -1,8 +1,10 @@
 """API access status"""
+
 # pylint: disable=too-few-public-methods
 import datetime
 import typing
 import strawberry
+from strawberry.types import Info

 from selfprivacy_api.actions.api_tokens import (
     get_api_tokens_with_caller_flag,

View file

@@ -1,4 +1,5 @@
 """Backup"""
+
 # pylint: disable=too-few-public-methods
 import typing
 import strawberry
@@ -6,6 +7,7 @@ import strawberry
 from selfprivacy_api.backup import Backups
 from selfprivacy_api.backup.local_secret import LocalBackupSecret
+from selfprivacy_api.backup.tasks import which_snapshots_to_full_restore
 from selfprivacy_api.graphql.queries.providers import BackupProvider
 from selfprivacy_api.graphql.common_types.service import (
     Service,
@@ -14,7 +16,8 @@ from selfprivacy_api.graphql.common_types.service import (
     service_to_graphql_service,
 )
 from selfprivacy_api.graphql.common_types.backup import AutobackupQuotas
-from selfprivacy_api.services import get_service_by_id
+from selfprivacy_api.services import ServiceManager
+from selfprivacy_api.models.backup.snapshot import Snapshot


 @strawberry.type
@@ -34,6 +37,45 @@ class BackupConfiguration:
     location_id: typing.Optional[str]


+# TODO: Ideally this should not be done in the API, but making an internal
+# Service requires more work than making an API record about a service.
+def tombstone_service(service_id: str) -> Service:
+    return Service(
+        id=service_id,
+        display_name=f"{service_id} (Orphaned)",
+        description="",
+        svg_icon="",
+        is_movable=False,
+        is_required=False,
+        is_enabled=False,
+        status=ServiceStatusEnum.OFF,
+        url=None,
+        can_be_backed_up=False,
+        backup_description="",
+        is_installed=False,
+    )
+
+
+def snapshot_to_api(snap: Snapshot):
+    api_service = None
+    service = ServiceManager.get_service_by_id(snap.service_name)
+
+    if service is None:
+        api_service = tombstone_service(snap.service_name)
+    else:
+        api_service = service_to_graphql_service(service)
+    if api_service is None:
+        raise NotImplementedError(
+            f"Could not construct an API Service record for: {snap.service_name}. This should be unreachable and is a bug if you see it."
+        )
+
+    return SnapshotInfo(
+        id=snap.id,
+        service=api_service,
+        created_at=snap.created_at,
+        reason=snap.reason,
+    )
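The tombstone pattern in miniature: a snapshot whose service id no longer resolves still gets a placeholder record, so one orphaned snapshot cannot fail the whole listing. Hypothetical data:

```python
def display_name_for(service_id: str, known_services: dict) -> str:
    # Known services keep their display name; unknown ones get a tombstone label.
    if service_id in known_services:
        return known_services[service_id]
    return f"{service_id} (Orphaned)"


assert display_name_for("mail", {"mail": "Mail"}) == "Mail"
assert display_name_for("ghost", {}) == "ghost (Orphaned)"
```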
 @strawberry.type
 class Backup:
     @strawberry.field
@@ -52,32 +94,15 @@ class Backup:
     def all_snapshots(self) -> typing.List[SnapshotInfo]:
         if not Backups.is_initted():
             return []
-        result = []
         snapshots = Backups.get_all_snapshots()
-        for snap in snapshots:
-            service = get_service_by_id(snap.service_name)
-            if service is None:
-                service = Service(
-                    id=snap.service_name,
-                    display_name=f"{snap.service_name} (Orphaned)",
-                    description="",
-                    svg_icon="",
-                    is_movable=False,
-                    is_required=False,
-                    is_enabled=False,
-                    status=ServiceStatusEnum.OFF,
-                    url=None,
-                    dns_records=None,
-                    can_be_backed_up=False,
-                    backup_description="",
-                )
-            else:
-                service = service_to_graphql_service(service)
-            graphql_snap = SnapshotInfo(
-                id=snap.id,
-                service=service,
-                created_at=snap.created_at,
-                reason=snap.reason,
-            )
-            result.append(graphql_snap)
-        return result
+        return [snapshot_to_api(snap) for snap in snapshots]
+
+    @strawberry.field
+    def last_slice(self) -> typing.List[SnapshotInfo]:
+        """
+        A query for seeing which snapshots will be restored when migrating
+        """
+        if not Backups.is_initted():
+            return []
+        return [snapshot_to_api(snap) for snap in which_snapshots_to_full_restore()]

View file

@@ -1,4 +1,5 @@
 """Common types and enums used by different types of queries."""
+
 from enum import Enum
 import datetime
 import typing

View file

@@ -1,24 +1,30 @@
 """Jobs status"""
+
 # pylint: disable=too-few-public-methods
-import typing
 import strawberry
+from typing import List, Optional

+from selfprivacy_api.jobs import Jobs
 from selfprivacy_api.graphql.common_types.jobs import (
     ApiJob,
     get_api_job_by_id,
     job_to_api_job,
 )
-from selfprivacy_api.jobs import Jobs
+
+
+def get_all_jobs() -> List[ApiJob]:
+    jobs = Jobs.get_jobs()
+    api_jobs = [job_to_api_job(job) for job in jobs]
+    assert api_jobs is not None
+    return api_jobs


 @strawberry.type
 class Job:
     @strawberry.field
-    def get_jobs(self) -> typing.List[ApiJob]:
-        Jobs.get_jobs()
-        return [job_to_api_job(job) for job in Jobs.get_jobs()]
+    def get_jobs(self) -> List[ApiJob]:
+        return get_all_jobs()

     @strawberry.field
-    def get_job(self, job_id: str) -> typing.Optional[ApiJob]:
+    def get_job(self, job_id: str) -> Optional[ApiJob]:
         return get_api_job_by_id(job_id)

View file

@@ -0,0 +1,99 @@
+"""System logs"""
+
+from datetime import datetime
+import typing
+import strawberry
+from selfprivacy_api.utils.systemd_journal import get_paginated_logs
+
+
+@strawberry.type
+class LogEntry:
+    message: str = strawberry.field()
+    timestamp: datetime = strawberry.field()
+    priority: typing.Optional[int] = strawberry.field()
+    systemd_unit: typing.Optional[str] = strawberry.field()
+    systemd_slice: typing.Optional[str] = strawberry.field()
+
+    def __init__(self, journal_entry: typing.Dict):
+        self.entry = journal_entry
+        self.message = journal_entry["MESSAGE"]
+        self.timestamp = journal_entry["__REALTIME_TIMESTAMP"]
+        self.priority = journal_entry.get("PRIORITY")
+        self.systemd_unit = journal_entry.get("_SYSTEMD_UNIT")
+        self.systemd_slice = journal_entry.get("_SYSTEMD_SLICE")
+
+    @strawberry.field()
+    def cursor(self) -> str:
+        return self.entry["__CURSOR"]
+
+
+@strawberry.type
+class LogsPageMeta:
+    up_cursor: typing.Optional[str] = strawberry.field()
+    down_cursor: typing.Optional[str] = strawberry.field()
+
+    def __init__(
+        self, up_cursor: typing.Optional[str], down_cursor: typing.Optional[str]
+    ):
+        self.up_cursor = up_cursor
+        self.down_cursor = down_cursor
+
+
+@strawberry.type
+class PaginatedEntries:
+    page_meta: LogsPageMeta = strawberry.field(
+        description="Metadata to aid in pagination."
+    )
+    entries: typing.List[LogEntry] = strawberry.field(
+        description="The list of log entries."
+    )
+
+    def __init__(self, meta: LogsPageMeta, entries: typing.List[LogEntry]):
+        self.page_meta = meta
+        self.entries = entries
+
+    @staticmethod
+    def from_entries(entries: typing.List[LogEntry]):
+        if entries == []:
+            return PaginatedEntries(LogsPageMeta(None, None), [])
+
+        return PaginatedEntries(
+            LogsPageMeta(
+                entries[0].cursor(),
+                entries[-1].cursor(),
+            ),
+            entries,
+        )
+
+
+@strawberry.type
+class Logs:
+    @strawberry.field()
+    def paginated(
+        self,
+        limit: int = 20,
+        # All entries returned will be lesser than this cursor. Sets the upper bound on results.
+        up_cursor: str | None = None,
+        # All entries returned will be greater than this cursor. Sets the lower bound on results.
+        down_cursor: str | None = None,
+        # All entries will be from a specific systemd slice
+        filterBySlice: str | None = None,
+        # All entries will be from a specific systemd unit
+        filterByUnit: str | None = None,
+    ) -> PaginatedEntries:
+        if limit > 50:
+            raise Exception("You can't fetch more than 50 entries via a single request.")
+        return PaginatedEntries.from_entries(
+            list(
+                map(
+                    lambda x: LogEntry(x),
+                    get_paginated_logs(
+                        limit,
+                        up_cursor,
+                        down_cursor,
+                        filterBySlice,
+                        filterByUnit,
+                    ),
+                )
+            )
+        )
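A hedged client-side sketch of paging with `get_paginated_logs` (signature taken from the resolver above; journald must be available, and the raw entries are assumed to be dicts with a `__CURSOR` key): to fetch the next, older page, the last entry's cursor is passed as the new upper bound.

```python
from selfprivacy_api.utils.systemd_journal import get_paginated_logs

first_page = get_paginated_logs(20, None, None, None, None)
if first_page:
    next_upper_bound = first_page[-1]["__CURSOR"]
    # Entries strictly "lesser" than this cursor, i.e. the next page down.
    second_page = get_paginated_logs(20, next_upper_bound, None, None, None)
```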

View file

@@ -0,0 +1,120 @@
+import strawberry
+from typing import Optional
+from datetime import datetime
+
+from selfprivacy_api.models.services import ServiceStatus
+from selfprivacy_api.services.prometheus import Prometheus
+from selfprivacy_api.utils.monitoring import (
+    MonitoringQueries,
+    MonitoringQueryError,
+    MonitoringValuesResult,
+    MonitoringMetricsResult,
+)
+
+
+@strawberry.type
+class CpuMonitoring:
+    start: Optional[datetime]
+    end: Optional[datetime]
+    step: int
+
+    @strawberry.field
+    def overall_usage(self) -> MonitoringValuesResult:
+        if Prometheus().get_status() != ServiceStatus.ACTIVE:
+            return MonitoringQueryError(error="Prometheus is not running")
+
+        return MonitoringQueries.cpu_usage_overall(self.start, self.end, self.step)
+
+
+@strawberry.type
+class MemoryMonitoring:
+    start: Optional[datetime]
+    end: Optional[datetime]
+    step: int
+
+    @strawberry.field
+    def overall_usage(self) -> MonitoringValuesResult:
+        if Prometheus().get_status() != ServiceStatus.ACTIVE:
+            return MonitoringQueryError(error="Prometheus is not running")
+
+        return MonitoringQueries.memory_usage_overall(self.start, self.end, self.step)
+
+    @strawberry.field
+    def average_usage_by_service(self) -> MonitoringMetricsResult:
+        if Prometheus().get_status() != ServiceStatus.ACTIVE:
+            return MonitoringQueryError(error="Prometheus is not running")
+
+        return MonitoringQueries.memory_usage_average_by_slice(self.start, self.end)
+
+    @strawberry.field
+    def max_usage_by_service(self) -> MonitoringMetricsResult:
+        if Prometheus().get_status() != ServiceStatus.ACTIVE:
+            return MonitoringQueryError(error="Prometheus is not running")
+
+        return MonitoringQueries.memory_usage_max_by_slice(self.start, self.end)
+
+
+@strawberry.type
+class DiskMonitoring:
+    start: Optional[datetime]
+    end: Optional[datetime]
+    step: int
+
+    @strawberry.field
+    def overall_usage(self) -> MonitoringMetricsResult:
+        if Prometheus().get_status() != ServiceStatus.ACTIVE:
+            return MonitoringQueryError(error="Prometheus is not running")
+
+        return MonitoringQueries.disk_usage_overall(self.start, self.end, self.step)
+
+
+@strawberry.type
+class NetworkMonitoring:
+    start: Optional[datetime]
+    end: Optional[datetime]
+    step: int
+
+    @strawberry.field
+    def overall_usage(self) -> MonitoringMetricsResult:
+        if Prometheus().get_status() != ServiceStatus.ACTIVE:
+            return MonitoringQueryError(error="Prometheus is not running")
+
+        return MonitoringQueries.network_usage_overall(self.start, self.end, self.step)
+
+
+@strawberry.type
+class Monitoring:
+    @strawberry.field
+    def cpu_usage(
+        self,
+        start: Optional[datetime] = None,
+        end: Optional[datetime] = None,
+        step: int = 60,
+    ) -> CpuMonitoring:
+        return CpuMonitoring(start=start, end=end, step=step)
+
+    @strawberry.field
+    def memory_usage(
+        self,
+        start: Optional[datetime] = None,
+        end: Optional[datetime] = None,
+        step: int = 60,
+    ) -> MemoryMonitoring:
+        return MemoryMonitoring(start=start, end=end, step=step)
+
+    @strawberry.field
+    def disk_usage(
+        self,
+        start: Optional[datetime] = None,
+        end: Optional[datetime] = None,
+        step: int = 60,
+    ) -> DiskMonitoring:
+        return DiskMonitoring(start=start, end=end, step=step)
+
+    @strawberry.field
+    def network_usage(
+        self,
+        start: Optional[datetime] = None,
+        end: Optional[datetime] = None,
+        step: int = 60,
+    ) -> NetworkMonitoring:
+        return NetworkMonitoring(start=start, end=end, step=step)

View file

@@ -1,4 +1,5 @@
 """Enums representing different service providers."""
+
 from enum import Enum
 import strawberry
@@ -14,6 +15,7 @@ class DnsProvider(Enum):
 class ServerProvider(Enum):
     HETZNER = "HETZNER"
     DIGITALOCEAN = "DIGITALOCEAN"
+    OTHER = "OTHER"


 @strawberry.enum

View file

@@ -1,4 +1,5 @@
 """Services status"""
+
 # pylint: disable=too-few-public-methods
 import typing
 import strawberry
@@ -7,12 +8,12 @@ from selfprivacy_api.graphql.common_types.service import (
     Service,
     service_to_graphql_service,
 )
-from selfprivacy_api.services import get_all_services
+from selfprivacy_api.services import ServiceManager


 @strawberry.type
 class Services:
     @strawberry.field
     def all_services(self) -> typing.List[Service]:
-        services = get_all_services()
+        services = ServiceManager.get_all_services()
         return [service_to_graphql_service(service) for service in services]

View file

@@ -1,4 +1,5 @@
 """Storage queries."""
+
 # pylint: disable=too-few-public-methods
 import typing
 import strawberry
@@ -18,9 +19,11 @@ class Storage:
         """Get list of volumes"""
         return [
             StorageVolume(
-                total_space=str(volume.fssize)
-                if volume.fssize is not None
-                else str(volume.size),
+                total_space=(
+                    str(volume.fssize)
+                    if volume.fssize is not None
+                    else str(volume.size)
+                ),
                 free_space=str(volume.fsavail),
                 used_space=str(volume.fsused),
                 root=volume.is_root(),

View file

@@ -1,15 +1,17 @@
 """Common system information and settings"""
+
 # pylint: disable=too-few-public-methods
 import os
 import typing
 import strawberry
+
 from selfprivacy_api.graphql.common_types.dns import DnsRecord
 from selfprivacy_api.graphql.queries.common import Alert, Severity
 from selfprivacy_api.graphql.queries.providers import DnsProvider, ServerProvider
 from selfprivacy_api.jobs import Jobs
 from selfprivacy_api.jobs.migrate_to_binds import is_bind_migrated
-from selfprivacy_api.services import get_all_required_dns_records
+from selfprivacy_api.services import ServiceManager
 from selfprivacy_api.utils import ReadUserData
 import selfprivacy_api.actions.system as system_actions
 import selfprivacy_api.actions.ssh as ssh_actions
@@ -35,7 +37,7 @@ class SystemDomainInfo:
             priority=record.priority,
             display_name=record.display_name,
         )
-        for record in get_all_required_dns_records()
+        for record in ServiceManager.get_all_required_dns_records()
     ]
@@ -156,8 +158,8 @@ class System:
         )
     )
     domain_info: SystemDomainInfo = strawberry.field(resolver=get_system_domain_info)
-    settings: SystemSettings = SystemSettings()
-    info: SystemInfo = SystemInfo()
+    settings: SystemSettings = strawberry.field(default_factory=SystemSettings)
+    info: SystemInfo = strawberry.field(default_factory=SystemInfo)
     provider: SystemProviderInfo = strawberry.field(resolver=get_system_provider_info)

     @strawberry.field

View file

@@ -1,4 +1,5 @@
 """Users"""
+
 # pylint: disable=too-few-public-methods
 import typing
 import strawberry

View file

@@ -1,9 +1,12 @@
 """GraphQL API for SelfPrivacy."""
+
 # pylint: disable=too-few-public-methods
 import asyncio
-from typing import AsyncGenerator
+from typing import AsyncGenerator, List
 import strawberry
+from strawberry.types import Info

 from selfprivacy_api.graphql import IsAuthenticated
 from selfprivacy_api.graphql.mutations.deprecated_mutations import (
     DeprecatedApiMutations,
@@ -24,9 +27,23 @@ from selfprivacy_api.graphql.mutations.backup_mutations import BackupMutations
 from selfprivacy_api.graphql.queries.api_queries import Api
 from selfprivacy_api.graphql.queries.backup import Backup
 from selfprivacy_api.graphql.queries.jobs import Job
+from selfprivacy_api.graphql.queries.logs import LogEntry, Logs
 from selfprivacy_api.graphql.queries.services import Services
 from selfprivacy_api.graphql.queries.storage import Storage
 from selfprivacy_api.graphql.queries.system import System
+from selfprivacy_api.graphql.queries.monitoring import Monitoring
+
+from selfprivacy_api.graphql.subscriptions.jobs import ApiJob
+from selfprivacy_api.graphql.subscriptions.jobs import (
+    job_updates as job_update_generator,
+)
+from selfprivacy_api.graphql.subscriptions.logs import log_stream
+
+from selfprivacy_api.graphql.common_types.service import (
+    StringConfigItem,
+    BoolConfigItem,
+    EnumConfigItem,
+)

 from selfprivacy_api.graphql.mutations.users_mutations import UsersMutations
 from selfprivacy_api.graphql.queries.users import Users
@@ -47,6 +64,11 @@ class Query:
         """System queries"""
         return System()

+    @strawberry.field(permission_classes=[IsAuthenticated])
+    def logs(self) -> Logs:
+        """Log queries"""
+        return Logs()
+
     @strawberry.field(permission_classes=[IsAuthenticated])
     def users(self) -> Users:
         """Users queries"""
@@ -72,6 +94,11 @@ class Query:
         """Backup queries"""
         return Backup()

+    @strawberry.field(permission_classes=[IsAuthenticated])
+    def monitoring(self) -> Monitoring:
+        """Monitoring queries"""
+        return Monitoring()


 @strawberry.type
 class Mutation(
@@ -129,22 +156,50 @@ class Mutation(
             code=200,
         )

+    pass
+
+
+# A cruft for Websockets
+def authenticated(info: Info) -> bool:
+    return IsAuthenticated().has_permission(source=None, info=info)
+
+
+def reject_if_unauthenticated(info: Info):
+    if not authenticated(info):
+        raise Exception(IsAuthenticated().message)


 @strawberry.type
 class Subscription:
-    """Root schema for subscriptions"""
+    """Root schema for subscriptions.
+    Every field here should be an AsyncIterator or AsyncGenerator
+    It is not a part of the spec but graphql-core (dep of strawberryql)
+    demands it while the spec is vague in this area."""

-    @strawberry.subscription(permission_classes=[IsAuthenticated])
-    async def count(self, target: int = 100) -> AsyncGenerator[int, None]:
-        for i in range(target):
+    @strawberry.subscription
+    async def job_updates(self, info: Info) -> AsyncGenerator[List[ApiJob], None]:
+        reject_if_unauthenticated(info)
+        return job_update_generator()
+
+    @strawberry.subscription
+    # Used for testing, consider deletion to shrink attack surface
+    async def count(self, info: Info) -> AsyncGenerator[int, None]:
+        reject_if_unauthenticated(info)
+        for i in range(10):
             yield i
             await asyncio.sleep(0.5)

+    @strawberry.subscription
+    async def log_entries(self, info: Info) -> AsyncGenerator[LogEntry, None]:
+        reject_if_unauthenticated(info)
+        return log_stream()


 schema = strawberry.Schema(
     query=Query,
     mutation=Mutation,
     subscription=Subscription,
+    types=[
+        StringConfigItem,
+        BoolConfigItem,
+        EnumConfigItem,
+    ],
 )
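
For context, a subscription defined this way can be consumed in-process through the schema object. The following is a rough sketch: the camel-cased field name, the selected fields, and the auth-passing context are assumptions based on strawberry defaults, not something this diff shows.

import asyncio

JOB_UPDATES = """
subscription {
    jobUpdates {
        uid
        status
        progress
    }
}
"""

async def watch_jobs():
    # schema.subscribe yields ExecutionResult objects as jobs change;
    # a context that satisfies IsAuthenticated must be supplied, or
    # reject_if_unauthenticated raises inside the resolver.
    async for result in await schema.subscribe(JOB_UPDATES):
        print(result.data)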

View file

@@ -0,0 +1,14 @@
# pylint: disable=too-few-public-methods
from typing import AsyncGenerator, List

from selfprivacy_api.jobs import job_notifications

from selfprivacy_api.graphql.common_types.jobs import ApiJob
from selfprivacy_api.graphql.queries.jobs import get_all_jobs


async def job_updates() -> AsyncGenerator[List[ApiJob], None]:
    # Send the complete list of jobs every time anything gets updated
    async for notification in job_notifications():
        yield get_all_jobs()

View file

@@ -0,0 +1,31 @@
from typing import AsyncGenerator
from systemd import journal
import asyncio

from selfprivacy_api.graphql.queries.logs import LogEntry


async def log_stream() -> AsyncGenerator[LogEntry, None]:
    j = journal.Reader()

    j.seek_tail()
    j.get_previous()

    queue = asyncio.Queue()

    async def callback():
        if j.process() != journal.APPEND:
            return
        for entry in j:
            await queue.put(entry)

    asyncio.get_event_loop().add_reader(j, lambda: asyncio.ensure_future(callback()))

    while True:
        entry = await queue.get()
        try:
            yield LogEntry(entry)
        except Exception:
            asyncio.get_event_loop().remove_reader(j)
            return
        queue.task_done()
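
The generator above bridges the blocking journald reader into asyncio: add_reader registers the journal's file descriptor with the event loop, and j.process() both acknowledges the wakeup and reports whether new entries were appended (journal.APPEND). Consuming it is then ordinary async iteration; a brief sketch, assuming a running event loop on a host with systemd journald:

async def tail_logs() -> None:
    # Prints journal entries as they arrive.
    async for log_entry in log_stream():
        print(log_entry)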

View file

@@ -14,7 +14,9 @@ A job is a dictionary with the following keys:
 - error: error message if the job failed
 - result: result of the job
 """
+
 import typing
+import asyncio
 import datetime
 from uuid import UUID
 import uuid
@@ -23,6 +25,7 @@ from enum import Enum
 from pydantic import BaseModel

 from selfprivacy_api.utils.redis_pool import RedisPool
+from selfprivacy_api.utils.redis_model_storage import store_model_as_hash

 JOB_EXPIRATION_SECONDS = 10 * 24 * 60 * 60  # ten days
@@ -102,7 +105,7 @@ class Jobs:
             result=None,
         )
         redis = RedisPool().get_connection()
-        _store_job_as_hash(redis, _redis_key_from_uuid(job.uid), job)
+        store_model_as_hash(redis, _redis_key_from_uuid(job.uid), job)
         return job

     @staticmethod
@@ -218,7 +221,7 @@ class Jobs:
         redis = RedisPool().get_connection()
         key = _redis_key_from_uuid(job.uid)
         if redis.exists(key):
-            _store_job_as_hash(redis, key, job)
+            store_model_as_hash(redis, key, job)
             if status in (JobStatus.FINISHED, JobStatus.ERROR):
                 redis.expire(key, JOB_EXPIRATION_SECONDS)
@@ -268,6 +271,20 @@ class Jobs:
         return False


+def report_progress(progress: int, job: Job, status_text: str) -> None:
+    """
+    A terse way to call a common operation, for readability
+    job.report_progress() would be even better
+    but it would go against how this file is written
+    """
+    Jobs.update(
+        job=job,
+        status=JobStatus.RUNNING,
+        status_text=status_text,
+        progress=progress,
+    )
+
+
 def _redis_key_from_uuid(uuid_string) -> str:
     return "jobs:" + str(uuid_string)
@@ -280,17 +297,6 @@ def _progress_log_key_from_uuid(uuid_string) -> str:
     return PROGRESS_LOGS_PREFIX + str(uuid_string)


-def _store_job_as_hash(redis, redis_key, model) -> None:
-    for key, value in model.dict().items():
-        if isinstance(value, uuid.UUID):
-            value = str(value)
-        if isinstance(value, datetime.datetime):
-            value = value.isoformat()
-        if isinstance(value, JobStatus):
-            value = value.value
-        redis.hset(redis_key, key, str(value))
-
-
 def _job_from_hash(redis, redis_key) -> typing.Optional[Job]:
     if redis.exists(redis_key):
         job_dict = redis.hgetall(redis_key)
@@ -307,3 +313,15 @@ def _job_from_hash(redis, redis_key) -> typing.Optional[Job]:
         return Job(**job_dict)

     return None
+
+
+async def job_notifications() -> typing.AsyncGenerator[dict, None]:
+    channel = await RedisPool().subscribe_to_keys("jobs:*")
+    while True:
+        try:
+            # we cannot timeout here because we do not know when the next message is supposed to arrive
+            message: dict = await channel.get_message(ignore_subscribe_messages=True, timeout=None)  # type: ignore
+            if message is not None:
+                yield message
+        except GeneratorExit:
+            break
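
subscribe_to_keys is a project wrapper around Redis keyspace notifications; its internals are outside this diff. For orientation only, roughly equivalent plumbing with plain redis-py asyncio looks like the sketch below (illustrative assumptions: database 0, a local Redis, and notify-keyspace-events enabled in the Redis config, e.g. "notify-keyspace-events KEA"):

import redis.asyncio as redis

async def raw_job_notifications():
    r = redis.Redis()
    pubsub = r.pubsub()
    # __keyspace@0__ channels fire on every write to a matching key
    await pubsub.psubscribe("__keyspace@0__:jobs:*")
    async for message in pubsub.listen():
        if message["type"] == "pmessage":
            yield message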

View file

@@ -1,4 +1,5 @@
 """Function to perform migration of app data to binds."""
+
 import subprocess
 import pathlib
 import shutil
@@ -6,7 +7,7 @@ import shutil
 from pydantic import BaseModel

 from selfprivacy_api.jobs import Job, JobStatus, Jobs
 from selfprivacy_api.services.bitwarden import Bitwarden
-from selfprivacy_api.services.gitea import Gitea
+from selfprivacy_api.services.forgejo import Forgejo
 from selfprivacy_api.services.mailserver import MailServer
 from selfprivacy_api.services.nextcloud import Nextcloud
 from selfprivacy_api.services.pleroma import Pleroma
@@ -67,8 +68,8 @@ def move_folder(
     try:
         data_path.mkdir(mode=0o750, parents=True, exist_ok=True)
-    except Exception as e:
-        print(f"Error creating data path: {e}")
+    except Exception as error:
+        print(f"Error creating data path: {error}")
         return

     try:
@@ -230,7 +231,7 @@ def migrate_to_binds(config: BindMigrationConfig, job: Job):
         status_text="Migrating Gitea.",
     )

-    Gitea().stop()
+    Forgejo().stop()

     if not pathlib.Path("/volumes/sda1/gitea").exists():
         if not pathlib.Path("/volumes/sdb/gitea").exists():
@@ -241,7 +242,7 @@ def migrate_to_binds(config: BindMigrationConfig, job: Job):
         group="gitea",
     )

-    Gitea().start()
+    Forgejo().start()

     # Perform migration of Mail server

View file

@ -0,0 +1,153 @@
import re
import subprocess
from typing import Tuple, Iterable
from selfprivacy_api.utils.huey import huey
from selfprivacy_api.jobs import JobStatus, Jobs, Job
class ShellException(Exception):
"""Shell-related errors"""
COMPLETED_WITH_ERROR = "Error occurred, please report this to the support chat."
RESULT_WAS_NOT_FOUND_ERROR = (
"We are sorry, garbage collection result was not found. "
"Something went wrong, please report this to the support chat."
)
CLEAR_COMPLETED = "Garbage collection completed."
def delete_old_gens_and_return_dead_report() -> str:
subprocess.run(
[
"nix-env",
"-p",
"/nix/var/nix/profiles/system",
"--delete-generations",
"old",
],
check=False,
)
result = subprocess.check_output(["nix-store", "--gc", "--print-dead"]).decode(
"utf-8"
)
return " " if result is None else result
def run_nix_collect_garbage() -> Iterable[bytes]:
process = subprocess.Popen(
["nix-store", "--gc"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT
)
return process.stdout if process.stdout else iter([])
def parse_line(job: Job, line: str) -> Job:
"""
We parse the string for the presence of a final line,
with the final amount of space cleared.
Simply put, we're just looking for a similar string:
"1537 store paths deleted, 339.84 MiB freed".
"""
pattern = re.compile(r"[+-]?\d+\.\d+ \w+(?= freed)")
match = re.search(pattern, line)
if match is None:
raise ShellException("nix returned gibberish output")
else:
Jobs.update(
job=job,
status=JobStatus.FINISHED,
status_text=CLEAR_COMPLETED,
result=f"{match.group(0)} have been cleared",
)
return job
def process_stream(job: Job, stream: Iterable[bytes], total_dead_packages: int) -> None:
completed_packages = 0
prev_progress = 0
for line in stream:
line = line.decode("utf-8")
if "deleting '/nix/store/" in line:
completed_packages += 1
percent = int((completed_packages / total_dead_packages) * 100)
if percent - prev_progress >= 5:
Jobs.update(
job=job,
status=JobStatus.RUNNING,
progress=percent,
status_text="Cleaning...",
)
prev_progress = percent
elif "store paths deleted," in line:
parse_line(job, line)
def get_dead_packages(output) -> Tuple[int, float]:
dead = len(re.findall("/nix/store/", output))
percent = 0
if dead != 0:
percent = 100 / dead
return dead, percent
@huey.task()
def calculate_and_clear_dead_paths(job: Job):
Jobs.update(
job=job,
status=JobStatus.RUNNING,
progress=0,
status_text="Calculate the number of dead packages...",
)
dead_packages, package_equal_to_percent = get_dead_packages(
delete_old_gens_and_return_dead_report()
)
if dead_packages == 0:
Jobs.update(
job=job,
status=JobStatus.FINISHED,
status_text="Nothing to clear",
result="System is clear",
)
return True
Jobs.update(
job=job,
status=JobStatus.RUNNING,
progress=0,
status_text=f"Found {dead_packages} packages to remove!",
)
stream = run_nix_collect_garbage()
try:
process_stream(job, stream, dead_packages)
except ShellException as error:
Jobs.update(
job=job,
status=JobStatus.ERROR,
status_text=COMPLETED_WITH_ERROR,
error=RESULT_WAS_NOT_FOUND_ERROR,
)
def start_nix_collect_garbage() -> Job:
job = Jobs.add(
type_id="maintenance.collect_nix_garbage",
name="Collect garbage",
description="Cleaning up unused packages",
)
calculate_and_clear_dead_paths(job=job)
return job
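
A quick check of the final-line parser against the sample string quoted in the parse_line docstring above (illustrative only):

import re

line = "1537 store paths deleted, 339.84 MiB freed"
match = re.search(r"[+-]?\d+\.\d+ \w+(?= freed)", line)
# The lookahead keeps " freed" out of the match, leaving just the amount
assert match is not None and match.group(0) == "339.84 MiB"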

View file

@@ -0,0 +1,137 @@
"""
A task to start the system upgrade or rebuild by starting a systemd unit.
After starting, track the status of the systemd unit and update the Job
status accordingly.
"""

import subprocess
from selfprivacy_api.utils.huey import huey
from selfprivacy_api.jobs import JobStatus, Jobs, Job
from selfprivacy_api.utils.waitloop import wait_until_true
from selfprivacy_api.utils.systemd import (
    get_service_status,
    get_last_log_lines,
    ServiceStatus,
)

START_TIMEOUT = 60 * 5
START_INTERVAL = 1
RUN_TIMEOUT = 60 * 60
RUN_INTERVAL = 5


def check_if_started(unit_name: str):
    """Check if the systemd unit has started"""
    try:
        status = get_service_status(unit_name)
        if status == ServiceStatus.ACTIVE:
            return True
        return False
    except subprocess.CalledProcessError:
        return False


def check_running_status(job: Job, unit_name: str):
    """Check if the systemd unit is running"""
    try:
        status = get_service_status(unit_name)
        if status == ServiceStatus.INACTIVE:
            Jobs.update(
                job=job,
                status=JobStatus.FINISHED,
                result="System rebuilt.",
                progress=100,
            )
            return True
        if status == ServiceStatus.FAILED:
            log_lines = get_last_log_lines(unit_name, 10)
            Jobs.update(
                job=job,
                status=JobStatus.ERROR,
                error="System rebuild failed. Last log lines:\n" + "\n".join(log_lines),
            )
            return True
        if status == ServiceStatus.ACTIVE:
            log_lines = get_last_log_lines(unit_name, 1)
            Jobs.update(
                job=job,
                status=JobStatus.RUNNING,
                status_text=log_lines[0] if len(log_lines) > 0 else "",
            )
            return False
        return False
    except subprocess.CalledProcessError:
        return False


def rebuild_system(job: Job, upgrade: bool = False):
    """
    Broken out to allow calling it synchronously.
    We cannot just block until task is done because it will require a second worker
    Which we do not have
    """
    unit_name = "sp-nixos-upgrade.service" if upgrade else "sp-nixos-rebuild.service"
    try:
        command = ["systemctl", "start", unit_name]
        subprocess.run(
            command,
            check=True,
            start_new_session=True,
            shell=False,
        )
        Jobs.update(
            job=job,
            status=JobStatus.RUNNING,
            status_text="Starting the system rebuild...",
        )
        # Wait for the systemd unit to start
        try:
            wait_until_true(
                lambda: check_if_started(unit_name),
                timeout_sec=START_TIMEOUT,
                interval=START_INTERVAL,
            )
        except TimeoutError:
            log_lines = get_last_log_lines(unit_name, 10)
            Jobs.update(
                job=job,
                status=JobStatus.ERROR,
                error="System rebuild timed out. Last log lines:\n"
                + "\n".join(log_lines),
            )
            return
        Jobs.update(
            job=job,
            status=JobStatus.RUNNING,
            status_text="Rebuilding the system...",
        )
        # Wait for the systemd unit to finish
        try:
            wait_until_true(
                lambda: check_running_status(job, unit_name),
                timeout_sec=RUN_TIMEOUT,
                interval=RUN_INTERVAL,
            )
        except TimeoutError:
            log_lines = get_last_log_lines(unit_name, 10)
            Jobs.update(
                job=job,
                status=JobStatus.ERROR,
                error="System rebuild timed out. Last log lines:\n"
                + "\n".join(log_lines),
            )
            return
    except subprocess.CalledProcessError as e:
        Jobs.update(
            job=job,
            status=JobStatus.ERROR,
            status_text=str(e),
        )


@huey.task()
def rebuild_system_task(job: Job, upgrade: bool = False):
    """Rebuild the system"""
    rebuild_system(job, upgrade)
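
wait_until_true is imported from selfprivacy_api.utils.waitloop and its implementation is not part of this diff. A minimal sketch consistent with how it is called above (a predicate, timeout_sec, interval, raising TimeoutError on expiry) could look like this; treat it as an assumption, not the project's actual code:

import time
from typing import Callable

def wait_until_true_sketch(
    predicate: Callable[[], bool], timeout_sec: int, interval: float = 0.1
) -> None:
    # Poll the predicate until it returns True or the deadline passes
    deadline = time.monotonic() + timeout_sec
    while time.monotonic() < deadline:
        if predicate():
            return
        time.sleep(interval)
    raise TimeoutError()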

View file

@@ -11,9 +11,17 @@ Adding DISABLE_ALL to that array disables the migrations module entirely.

 from selfprivacy_api.utils import ReadUserData, UserDataFiles
 from selfprivacy_api.migrations.write_token_to_redis import WriteTokenToRedis
+from selfprivacy_api.migrations.check_for_system_rebuild_jobs import (
+    CheckForSystemRebuildJobs,
+)
+from selfprivacy_api.migrations.add_roundcube import AddRoundcube
+from selfprivacy_api.migrations.add_monitoring import AddMonitoring

 migrations = [
     WriteTokenToRedis(),
+    CheckForSystemRebuildJobs(),
+    AddMonitoring(),
+    AddRoundcube(),
 ]

View file

@@ -0,0 +1,37 @@
from selfprivacy_api.migrations.migration import Migration

from selfprivacy_api.services.flake_service_manager import FlakeServiceManager
from selfprivacy_api.utils import ReadUserData, WriteUserData
from selfprivacy_api.utils.block_devices import BlockDevices


class AddMonitoring(Migration):
    """Adds monitoring service if it is not present."""

    def get_migration_name(self) -> str:
        return "add_monitoring"

    def get_migration_description(self) -> str:
        return "Adds the Monitoring if it is not present."

    def is_migration_needed(self) -> bool:
        with FlakeServiceManager() as manager:
            if "monitoring" not in manager.services:
                return True
        with ReadUserData() as data:
            if "monitoring" not in data["modules"]:
                return True
        return False

    def migrate(self) -> None:
        with FlakeServiceManager() as manager:
            if "monitoring" not in manager.services:
                manager.services["monitoring"] = (
                    "git+https://git.selfprivacy.org/SelfPrivacy/selfprivacy-nixos-config.git?ref=flakes&dir=sp-modules/monitoring"
                )
        with WriteUserData() as data:
            if "monitoring" not in data["modules"]:
                data["modules"]["monitoring"] = {
                    "enable": True,
                    "location": BlockDevices().get_root_block_device().name,
                }

View file

@@ -0,0 +1,27 @@
from selfprivacy_api.migrations.migration import Migration

from selfprivacy_api.services.flake_service_manager import FlakeServiceManager
from selfprivacy_api.utils import ReadUserData, WriteUserData


class AddRoundcube(Migration):
    """Adds the Roundcube if it is not present."""

    def get_migration_name(self) -> str:
        return "add_roundcube"

    def get_migration_description(self) -> str:
        return "Adds the Roundcube if it is not present."

    def is_migration_needed(self) -> bool:
        with FlakeServiceManager() as manager:
            if "roundcube" not in manager.services:
                return True
        return False

    def migrate(self) -> None:
        with FlakeServiceManager() as manager:
            if "roundcube" not in manager.services:
                manager.services["roundcube"] = (
                    "git+https://git.selfprivacy.org/SelfPrivacy/selfprivacy-nixos-config.git?ref=flakes&dir=sp-modules/roundcube"
                )

View file

@@ -0,0 +1,48 @@
from selfprivacy_api.migrations.migration import Migration

from selfprivacy_api.jobs import JobStatus, Jobs


class CheckForSystemRebuildJobs(Migration):
    """Check if there are unfinished system rebuild jobs and finish them"""

    def get_migration_name(self) -> str:
        return "check_for_system_rebuild_jobs"

    def get_migration_description(self) -> str:
        return "Check if there are unfinished system rebuild jobs and finish them"

    def is_migration_needed(self) -> bool:
        # Check if there are any unfinished system rebuild jobs
        for job in Jobs.get_jobs():
            if (
                job.type_id
                in [
                    "system.nixos.rebuild",
                    "system.nixos.upgrade",
                ]
            ) and job.status in [
                JobStatus.CREATED,
                JobStatus.RUNNING,
            ]:
                return True
        return False

    def migrate(self) -> None:
        # As the API is restarted, we assume that the jobs are finished
        for job in Jobs.get_jobs():
            if (
                job.type_id
                in [
                    "system.nixos.rebuild",
                    "system.nixos.upgrade",
                ]
            ) and job.status in [
                JobStatus.CREATED,
                JobStatus.RUNNING,
            ]:
                Jobs.update(
                    job=job,
                    status=JobStatus.FINISHED,
                    result="System rebuilt.",
                    progress=100,
                )

View file

@@ -12,17 +12,17 @@ class Migration(ABC):
     """

     @abstractmethod
-    def get_migration_name(self):
+    def get_migration_name(self) -> str:
         pass

     @abstractmethod
-    def get_migration_description(self):
+    def get_migration_description(self) -> str:
         pass

     @abstractmethod
-    def is_migration_needed(self):
+    def is_migration_needed(self) -> bool:
         pass

     @abstractmethod
-    def migrate(self):
+    def migrate(self) -> None:
         pass

View file

@@ -15,10 +15,10 @@ from selfprivacy_api.utils import ReadUserData, UserDataFiles
 class WriteTokenToRedis(Migration):
     """Load Json tokens into Redis"""

-    def get_migration_name(self):
+    def get_migration_name(self) -> str:
         return "write_token_to_redis"

-    def get_migration_description(self):
+    def get_migration_description(self) -> str:
         return "Loads the initial token into redis token storage"

     def is_repo_empty(self, repo: AbstractTokensRepository) -> bool:
@@ -38,7 +38,7 @@ class WriteTokenToRedis(Migration):
             print(e)
             return None

-    def is_migration_needed(self):
+    def is_migration_needed(self) -> bool:
         try:
             if self.get_token_from_json() is not None and self.is_repo_empty(
                 RedisTokensRepository()
@@ -47,8 +47,9 @@ class WriteTokenToRedis(Migration):
         except Exception as e:
             print(e)
             return False
+        return False

-    def migrate(self):
+    def migrate(self) -> None:
         # Write info about providers to userdata.json
         try:
             token = self.get_token_from_json()

View file

@@ -0,0 +1,24 @@
from enum import Enum
from typing import Optional

from pydantic import BaseModel


class ServiceStatus(Enum):
    """Enum for service status"""

    ACTIVE = "ACTIVE"
    RELOADING = "RELOADING"
    INACTIVE = "INACTIVE"
    FAILED = "FAILED"
    ACTIVATING = "ACTIVATING"
    DEACTIVATING = "DEACTIVATING"
    OFF = "OFF"


class ServiceDnsRecord(BaseModel):
    type: str
    name: str
    content: str
    ttl: int
    display_name: str
    priority: Optional[int] = None
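
For context: get_service_status, used throughout this changeset, maps systemd unit state onto this enum. Its real implementation lives in selfprivacy_api.utils.systemd and is not shown in this diff; a plausible sketch of the idea, offered purely as an assumption:

import subprocess

def service_status_sketch(unit: str) -> ServiceStatus:
    # `systemctl show <unit> --property=ActiveState` prints e.g. "ActiveState=active";
    # the enum values above match systemd's state names uppercased.
    out = subprocess.check_output(
        ["systemctl", "show", unit, "--property=ActiveState"]
    ).decode()
    state = out.strip().split("=", 1)[1]
    return ServiceStatus(state.upper())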

View file

@@ -1,6 +1,7 @@
 """
 New device key used to obtain access token.
 """
+
 from datetime import datetime, timedelta, timezone
 import secrets

 from pydantic import BaseModel

View file

@@ -3,6 +3,7 @@ Recovery key used to obtain access token.
 Recovery key has a token string, date of creation, optional date of expiration and optional count of uses left.
 """
+
 from datetime import datetime, timezone
 import secrets
 from typing import Optional

View file

@@ -3,6 +3,7 @@ Model of the access token.
 Access token has a token string, device name and date of creation.
 """
+
 from datetime import datetime
 import secrets

 from pydantic import BaseModel

View file

@@ -1,6 +1,7 @@
 """
 Token repository using Redis as backend.
 """
+
 from typing import Any, Optional
 from datetime import datetime
 from hashlib import md5
@@ -30,7 +31,7 @@ class RedisTokensRepository(AbstractTokensRepository):
     @staticmethod
     def token_key_for_device(device_name: str):
-        md5_hash = md5()
+        md5_hash = md5(usedforsecurity=False)
         md5_hash.update(bytes(device_name, "utf-8"))
         digest = md5_hash.hexdigest()
         return TOKENS_PREFIX + digest

View file

@@ -1,69 +1,263 @@
 """Services module."""

+import base64
 import typing
+from typing import List
+
+from os import path, remove
+from os import makedirs
+from os import listdir
+from os.path import join

 from selfprivacy_api.services.bitwarden import Bitwarden
-from selfprivacy_api.services.gitea import Gitea
+from selfprivacy_api.services.forgejo import Forgejo
 from selfprivacy_api.services.jitsimeet import JitsiMeet
+from selfprivacy_api.services.prometheus import Prometheus
+from selfprivacy_api.services.roundcube import Roundcube
 from selfprivacy_api.services.mailserver import MailServer
 from selfprivacy_api.services.nextcloud import Nextcloud
 from selfprivacy_api.services.pleroma import Pleroma
 from selfprivacy_api.services.ocserv import Ocserv
 from selfprivacy_api.services.service import Service, ServiceDnsRecord
+from selfprivacy_api.services.service import ServiceStatus
 import selfprivacy_api.utils.network as network_utils
+from selfprivacy_api.services.api_icon import API_ICON
+from selfprivacy_api.utils import USERDATA_FILE, DKIM_DIR, SECRETS_FILE, get_domain
+from selfprivacy_api.utils.block_devices import BlockDevices
+from shutil import copyfile, copytree, rmtree
+
+CONFIG_STASH_DIR = "/etc/selfprivacy/dump"
+
+
+class ServiceManager(Service):
+    folders: List[str] = [CONFIG_STASH_DIR]
+
+    @staticmethod
+    def get_all_services() -> list[Service]:
+        return services
+
+    @staticmethod
+    def get_service_by_id(service_id: str) -> typing.Optional[Service]:
+        for service in services:
+            if service.get_id() == service_id:
+                return service
+        return None
+
+    @staticmethod
+    def get_enabled_services() -> list[Service]:
+        return [service for service in services if service.is_enabled()]
+
+    # This one is not currently used by any code.
+    @staticmethod
+    def get_disabled_services() -> list[Service]:
+        return [service for service in services if not service.is_enabled()]
+
+    @staticmethod
+    def get_services_by_location(location: str) -> list[Service]:
+        return [service for service in services if service.get_drive() == location]
+
+    @staticmethod
+    def get_all_required_dns_records() -> list[ServiceDnsRecord]:
+        ip4 = network_utils.get_ip4()
+        ip6 = network_utils.get_ip6()
+        dns_records: list[ServiceDnsRecord] = [
+            ServiceDnsRecord(
+                type="A",
+                name="api",
+                content=ip4,
+                ttl=3600,
+                display_name="SelfPrivacy API",
+            ),
+        ]
+
+        if ip6 is not None:
+            dns_records.append(
+                ServiceDnsRecord(
+                    type="AAAA",
+                    name="api",
+                    content=ip6,
+                    ttl=3600,
+                    display_name="SelfPrivacy API (IPv6)",
+                )
+            )
+
+        for service in ServiceManager.get_enabled_services():
+            dns_records += service.get_dns_records(ip4, ip6)
+        return dns_records
+
+    @staticmethod
+    def get_id() -> str:
+        """Return service id."""
+        return "api"
+
+    @staticmethod
+    def get_display_name() -> str:
+        """Return service display name."""
+        return "Selfprivacy API"
+
+    @staticmethod
+    def get_description() -> str:
+        """Return service description."""
+        return "A proto-service for API itself. Currently manages backups of settings."
+
+    @staticmethod
+    def get_svg_icon() -> str:
+        """Read SVG icon from file and return it as base64 encoded string."""
+        # return ""
+        return base64.b64encode(API_ICON.encode("utf-8")).decode("utf-8")
+
+    @staticmethod
+    def get_url() -> typing.Optional[str]:
+        """Return service url."""
+        domain = get_domain()
+        subdomain = ServiceManager.get_subdomain()
+        return f"https://{subdomain}.{domain}" if subdomain else None
+
+    @staticmethod
+    def get_subdomain() -> typing.Optional[str]:
+        return "api"
+
+    @staticmethod
+    def is_always_active() -> bool:
+        return True
+
+    @staticmethod
+    def is_movable() -> bool:
+        return False
+
+    @staticmethod
+    def is_required() -> bool:
+        return True
+
+    @staticmethod
+    def is_enabled() -> bool:
+        return True
+
+    @staticmethod
+    def get_backup_description() -> str:
+        return "How did we get here?"
+
+    @classmethod
+    def get_status(cls) -> ServiceStatus:
+        return ServiceStatus.ACTIVE
+
+    @classmethod
+    def can_be_backed_up(cls) -> bool:
+        """`True` if the service can be backed up."""
+        return True
+
+    @classmethod
+    def merge_settings(cls):
+        # For now we will just copy settings EXCEPT the locations of services
+        # Stash locations as they are set by user right now
+        locations = {}
+        for service in services:
+            locations[service.get_id()] = service.get_drive()
+
+        # Copy files
+        for p in [USERDATA_FILE, SECRETS_FILE, DKIM_DIR]:
+            cls.retrieve_stashed_path(p)
+
+        # Pop locations
+        for service in services:
+            device = BlockDevices().get_block_device(locations[service.get_id()])
+            if device is not None:
+                service.set_location(device)
+
+    @classmethod
+    def stop(cls):
+        """
+        We are always active
+        """
+        raise ValueError("tried to stop an always active service")
+
+    @classmethod
+    def start(cls):
+        """
+        We are always active
+        """
+        pass
+
+    @classmethod
+    def restart(cls):
+        """
+        We are always active
+        """
+        pass
+
+    @staticmethod
+    def get_logs():
+        # TODO: maybe return the logs for api itself
+        return ""
+
+    @classmethod
+    def get_drive(cls) -> str:
+        return BlockDevices().get_root_block_device().name
+
+    @classmethod
+    def get_folders(cls) -> List[str]:
+        return cls.folders
+
+    @classmethod
+    def stash_for(cls, p: str) -> str:
+        basename = path.basename(p)
+        stashed_file_location = join(cls.dump_dir(), basename)
+        return stashed_file_location
+
+    @classmethod
+    def stash_a_path(cls, p: str):
+        if path.isdir(p):
+            rmtree(cls.stash_for(p), ignore_errors=True)
+            copytree(p, cls.stash_for(p))
+        else:
+            copyfile(p, cls.stash_for(p))
+
+    @classmethod
+    def retrieve_stashed_path(cls, p: str):
+        """
+        Takes an original path, hopefully it is stashed somewhere
+        """
+        if path.isdir(p):
+            rmtree(p, ignore_errors=True)
+            copytree(cls.stash_for(p), p)
+        else:
+            copyfile(cls.stash_for(p), p)
+
+    @classmethod
+    def pre_backup(cls):
+        tempdir = cls.dump_dir()
+        if not path.exists(tempdir):
+            makedirs(tempdir)
+
+        paths = listdir(tempdir)
+        for file in paths:
+            remove(file)
+
+        for p in [USERDATA_FILE, SECRETS_FILE, DKIM_DIR]:
+            cls.stash_a_path(p)
+
+    @classmethod
+    def dump_dir(cls) -> str:
+        """
+        A directory we dump our settings into
+        """
+        return cls.folders[0]
+
+    @classmethod
+    def post_restore(cls):
+        cls.merge_settings()
+        rmtree(cls.dump_dir(), ignore_errors=True)
+

 services: list[Service] = [
     Bitwarden(),
-    Gitea(),
+    Forgejo(),
     MailServer(),
     Nextcloud(),
     Pleroma(),
     Ocserv(),
     JitsiMeet(),
+    Roundcube(),
+    ServiceManager(),
+    Prometheus(),
 ]
-
-
-def get_all_services() -> list[Service]:
-    return services
-
-
-def get_service_by_id(service_id: str) -> typing.Optional[Service]:
-    for service in services:
-        if service.get_id() == service_id:
-            return service
-    return None
-
-
-def get_enabled_services() -> list[Service]:
-    return [service for service in services if service.is_enabled()]
-
-
-def get_disabled_services() -> list[Service]:
-    return [service for service in services if not service.is_enabled()]
-
-
-def get_services_by_location(location: str) -> list[Service]:
-    return [service for service in services if service.get_drive() == location]
-
-
-def get_all_required_dns_records() -> list[ServiceDnsRecord]:
-    ip4 = network_utils.get_ip4()
-    ip6 = network_utils.get_ip6()
-    dns_records: list[ServiceDnsRecord] = [
-        ServiceDnsRecord(
-            type="A",
-            name="api",
-            content=ip4,
-            ttl=3600,
-            display_name="SelfPrivacy API",
-        ),
-        ServiceDnsRecord(
-            type="AAAA",
-            name="api",
-            content=ip6,
-            ttl=3600,
-            display_name="SelfPrivacy API (IPv6)",
-        ),
-    ]
-    for service in get_enabled_services():
-        dns_records += service.get_dns_records()
-    return dns_records
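
The net effect of this rewrite: the old module-level helpers become static methods on a Service subclass, so the API itself participates in backups and DNS like any other service, and call sites switch from free functions to the class (as the queries/services and queries/system diffs above already show). A small illustration of the new call style:

# Former call sites like get_service_by_id("gitea") now go through the class:
service = ServiceManager.get_service_by_id("gitea")
if service is not None and service.is_movable():
    print(f"{service.get_display_name()} can be moved to another volume")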

View file

@@ -0,0 +1,5 @@
API_ICON = """
<svg width="33" height="33" viewBox="0 0 33 33" fill="none" xmlns="http://www.w3.org/2000/svg">
<path fill-rule="evenodd" clip-rule="evenodd" d="M0.98671 4.79425C0.98671 2.58511 2.77757 0.79425 4.98671 0.79425H28.9867C31.1958 0.79425 32.9867 2.58511 32.9867 4.79425V28.7943C32.9867 31.0034 31.1958 32.7943 28.9867 32.7943H4.98671C2.77757 32.7943 0.98671 31.0034 0.98671 28.7943V4.79425ZM26.9867 21.1483L24.734 18.8956V18.8198H24.6582L22.5047 16.6674V18.8198H11.358V23.2785H22.5047V25.6315L26.9867 21.1483ZM9.23944 10.1584H9.26842L11.4688 7.95697V10.1584H22.6154V14.6171H11.4688V16.9233L6.98671 12.439L9.23944 10.1863V10.1584Z" fill="black"/>
</svg>
"""

View file

@@ -1,21 +1,48 @@
 """Class representing Bitwarden service"""
+
 import base64
 import subprocess
-import typing
+from typing import List

-from selfprivacy_api.jobs import Job, Jobs
-from selfprivacy_api.services.generic_service_mover import FolderMoveNames, move_service
-from selfprivacy_api.services.generic_status_getter import get_service_status
-from selfprivacy_api.services.service import Service, ServiceDnsRecord, ServiceStatus
-from selfprivacy_api.utils import ReadUserData, WriteUserData, get_domain
-from selfprivacy_api.utils.block_devices import BlockDevice
-import selfprivacy_api.utils.network as network_utils
+from selfprivacy_api.utils.systemd import get_service_status
+from selfprivacy_api.services.service import Service, ServiceStatus
 from selfprivacy_api.services.bitwarden.icon import BITWARDEN_ICON
+from selfprivacy_api.services.config_item import (
+    StringServiceConfigItem,
+    BoolServiceConfigItem,
+    ServiceConfigItem,
+)
+from selfprivacy_api.utils.regex_strings import SUBDOMAIN_REGEX


 class Bitwarden(Service):
     """Class representing Bitwarden service."""

+    config_items: dict[str, ServiceConfigItem] = {
+        "subdomain": StringServiceConfigItem(
+            id="subdomain",
+            default_value="password",
+            description="Subdomain",
+            regex=SUBDOMAIN_REGEX,
+            widget="subdomain",
+        ),
+        "signupsAllowed": BoolServiceConfigItem(
+            id="signupsAllowed",
+            default_value=True,
+            description="Allow new user signups",
+        ),
+        "sendsAllowed": BoolServiceConfigItem(
+            id="sendsAllowed",
+            default_value=True,
+            description="Allow users to use Bitwarden Send",
+        ),
+        "emergencyAccessAllowed": BoolServiceConfigItem(
+            id="emergencyAccessAllowed",
+            default_value=True,
+            description="Allow users to enable Emergency Access",
+        ),
+    }
+
     @staticmethod
     def get_id() -> str:
         """Return service id."""
@@ -40,12 +67,6 @@ class Bitwarden(Service):
     def get_user() -> str:
         return "vaultwarden"

-    @staticmethod
-    def get_url() -> typing.Optional[str]:
-        """Return service url."""
-        domain = get_domain()
-        return f"https://password.{domain}"
-
     @staticmethod
     def is_movable() -> bool:
         return True
@@ -83,55 +104,10 @@ class Bitwarden(Service):
     def restart():
         subprocess.run(["systemctl", "restart", "vaultwarden.service"])

-    @staticmethod
-    def get_configuration():
-        return {}
-
-    @staticmethod
-    def set_configuration(config_items):
-        return super().set_configuration(config_items)
-
     @staticmethod
     def get_logs():
         return ""

     @staticmethod
-    def get_folders() -> typing.List[str]:
+    def get_folders() -> List[str]:
         return ["/var/lib/bitwarden", "/var/lib/bitwarden_rs"]
-
-    @staticmethod
-    def get_dns_records() -> typing.List[ServiceDnsRecord]:
-        """Return list of DNS records for Bitwarden service."""
-        return [
-            ServiceDnsRecord(
-                type="A",
-                name="password",
-                content=network_utils.get_ip4(),
-                ttl=3600,
-                display_name="Bitwarden",
-            ),
-            ServiceDnsRecord(
-                type="AAAA",
-                name="password",
-                content=network_utils.get_ip6(),
-                ttl=3600,
-                display_name="Bitwarden (IPv6)",
-            ),
-        ]
-
-    def move_to_volume(self, volume: BlockDevice) -> Job:
-        job = Jobs.add(
-            type_id="services.bitwarden.move",
-            name="Move Bitwarden",
-            description=f"Moving Bitwarden data to {volume.name}",
-        )
-        move_service(
-            self,
-            volume,
-            job,
-            FolderMoveNames.default_foldermoves(self),
-            "bitwarden",
-        )
-        return job

View file

@@ -0,0 +1,245 @@
from abc import ABC, abstractmethod
import re
from typing import Optional

from selfprivacy_api.utils import (
    ReadUserData,
    WriteUserData,
    check_if_subdomain_is_taken,
)


class ServiceConfigItem(ABC):
    id: str
    description: str
    widget: str
    type: str

    @abstractmethod
    def get_value(self, service_id):
        pass

    @abstractmethod
    def set_value(self, value, service_id):
        pass

    @abstractmethod
    def validate_value(self, value):
        return True

    def as_dict(self, service_options):
        return {
            "id": self.id,
            "type": self.type,
            "description": self.description,
            "widget": self.widget,
            "value": self.get_value(service_options),
        }


class StringServiceConfigItem(ServiceConfigItem):
    def __init__(
        self,
        id: str,
        default_value: str,
        description: str,
        regex: Optional[str] = None,
        widget: Optional[str] = None,
        allow_empty: bool = False,
    ):
        if widget == "subdomain" and not regex:
            raise ValueError("Subdomain widget requires regex")
        self.id = id
        self.type = "string"
        self.default_value = default_value
        self.description = description
        self.regex = re.compile(regex) if regex else None
        self.widget = widget if widget else "text"
        self.allow_empty = allow_empty

    def get_value(self, service_id):
        with ReadUserData() as user_data:
            if "modules" in user_data and service_id in user_data["modules"]:
                return user_data["modules"][service_id].get(self.id, self.default_value)
            return self.default_value

    def set_value(self, value, service_id):
        if not self.validate_value(value):
            raise ValueError(f"Value {value} is not valid")
        with WriteUserData() as user_data:
            if "modules" not in user_data:
                user_data["modules"] = {}
            if service_id not in user_data["modules"]:
                user_data["modules"][service_id] = {}
            user_data["modules"][service_id][self.id] = value

    def as_dict(self, service_options):
        return {
            "id": self.id,
            "type": self.type,
            "description": self.description,
            "widget": self.widget,
            "value": self.get_value(service_options),
            "default_value": self.default_value,
            "regex": self.regex.pattern if self.regex else None,
        }

    def validate_value(self, value):
        if not isinstance(value, str):
            return False
        if not self.allow_empty and not value:
            return False
        if self.regex and not self.regex.match(value):
            return False
        if self.widget == "subdomain":
            if check_if_subdomain_is_taken(value):
                return False
        return True


class BoolServiceConfigItem(ServiceConfigItem):
    def __init__(
        self,
        id: str,
        default_value: bool,
        description: str,
        widget: Optional[str] = None,
    ):
        self.id = id
        self.type = "bool"
        self.default_value = default_value
        self.description = description
        self.widget = widget if widget else "switch"

    def get_value(self, service_id):
        with ReadUserData() as user_data:
            if "modules" in user_data and service_id in user_data["modules"]:
                return user_data["modules"][service_id].get(self.id, self.default_value)
            return self.default_value

    def set_value(self, value, service_id):
        if not self.validate_value(value):
            raise ValueError(f"Value {value} is not a boolean")
        with WriteUserData() as user_data:
            if "modules" not in user_data:
                user_data["modules"] = {}
            if service_id not in user_data["modules"]:
                user_data["modules"][service_id] = {}
            user_data["modules"][service_id][self.id] = value

    def as_dict(self, service_options):
        return {
            "id": self.id,
            "type": self.type,
            "description": self.description,
            "widget": self.widget,
            "value": self.get_value(service_options),
            "default_value": self.default_value,
        }

    def validate_value(self, value):
        return isinstance(value, bool)


class EnumServiceConfigItem(ServiceConfigItem):
    def __init__(
        self,
        id: str,
        default_value: str,
        description: str,
        options: list[str],
        widget: Optional[str] = None,
    ):
        self.id = id
        self.type = "enum"
        self.default_value = default_value
        self.description = description
        self.options = options
        self.widget = widget if widget else "select"

    def get_value(self, service_id):
        with ReadUserData() as user_data:
            if "modules" in user_data and service_id in user_data["modules"]:
                return user_data["modules"][service_id].get(self.id, self.default_value)
            return self.default_value

    def set_value(self, value, service_id):
        if not self.validate_value(value):
            raise ValueError(f"Value {value} is not in options")
        with WriteUserData() as user_data:
            if "modules" not in user_data:
                user_data["modules"] = {}
            if service_id not in user_data["modules"]:
                user_data["modules"][service_id] = {}
            user_data["modules"][service_id][self.id] = value

    def as_dict(self, service_options):
        return {
            "id": self.id,
            "type": self.type,
            "description": self.description,
            "widget": self.widget,
            "value": self.get_value(service_options),
            "default_value": self.default_value,
            "options": self.options,
        }

    def validate_value(self, value):
        if not isinstance(value, str):
            return False
        return value in self.options


# TODO: unused for now
class IntServiceConfigItem(ServiceConfigItem):
    def __init__(
        self,
        id: str,
        default_value: int,
        description: str,
        widget: Optional[str] = None,
        min_value: Optional[int] = None,
        max_value: Optional[int] = None,
    ) -> None:
        self.id = id
        self.type = "int"
        self.default_value = default_value
        self.description = description
        self.widget = widget if widget else "number"
        self.min_value = min_value
        self.max_value = max_value

    def get_value(self, service_id):
        with ReadUserData() as user_data:
            if "modules" in user_data and service_id in user_data["modules"]:
                return user_data["modules"][service_id].get(self.id, self.default_value)
            return self.default_value

    def set_value(self, value, service_id):
        if not self.validate_value(value):
            raise ValueError(f"Value {value} is not valid")
        with WriteUserData() as user_data:
            if "modules" not in user_data:
                user_data["modules"] = {}
            if service_id not in user_data["modules"]:
                user_data["modules"][service_id] = {}
            user_data["modules"][service_id][self.id] = value

    def as_dict(self, service_options):
        return {
            "id": self.id,
            "type": self.type,
            "description": self.description,
            "widget": self.widget,
            "value": self.get_value(service_options),
            "default_value": self.default_value,
            "min_value": self.min_value,
            "max_value": self.max_value,
        }

    def validate_value(self, value):
        if not isinstance(value, int):
            return False
        return (self.min_value is None or value >= self.min_value) and (
            self.max_value is None or value <= self.max_value
        )
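
Each config item reads and writes the modules.<service id> section of the user data file. An illustrative round-trip, reusing the enableLfs item declared for Forgejo later in this changeset (the service id "gitea" is kept for compatibility); this assumes a writable user-data file as provided by WriteUserData:

item = BoolServiceConfigItem(
    id="enableLfs",
    default_value=True,
    description="Enable Git LFS",
)
item.set_value(False, "gitea")  # lands in user_data["modules"]["gitea"]["enableLfs"]
assert item.get_value("gitea") is False
assert item.validate_value("no") is False  # strings are rejected, only bools pass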

View file

@@ -0,0 +1,53 @@
import re
from typing import Tuple, Optional

FLAKE_CONFIG_PATH = "/etc/nixos/sp-modules/flake.nix"


class FlakeServiceManager:
    def __enter__(self) -> "FlakeServiceManager":
        self.services = {}

        with open(FLAKE_CONFIG_PATH, "r") as file:
            for line in file:
                service_name, url = self._extract_services(input_string=line)
                if service_name and url:
                    self.services[service_name] = url

        return self

    def _extract_services(
        self, input_string: str
    ) -> Tuple[Optional[str], Optional[str]]:
        pattern = r"inputs\.([\w-]+)\.url\s*=\s*([\S]+);"
        match = re.search(pattern, input_string)

        if match:
            variable_name = match.group(1)
            url = match.group(2)
            return variable_name, url
        else:
            return None, None

    def __exit__(self, exc_type, exc_value, traceback) -> None:
        with open(FLAKE_CONFIG_PATH, "w") as file:
            file.write(
                """
{
description = "SelfPrivacy NixOS PoC modules/extensions/bundles/packages/etc";\n
"""
            )

            for key, value in self.services.items():
                file.write(
                    f"""
inputs.{key}.url = {value};
"""
                )

            file.write(
                """
outputs = _: { };
}
"""
            )
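
Usage is a context-managed round-trip: entering parses the inputs.<name>.url lines out of the existing flake.nix, and exiting regenerates the whole file from self.services. This is exactly what the AddMonitoring and AddRoundcube migrations above rely on. A sketch, only safe on a host where rewriting /etc/nixos/sp-modules/flake.nix is acceptable:

with FlakeServiceManager() as manager:
    manager.services["monitoring"] = (
        "git+https://git.selfprivacy.org/SelfPrivacy/selfprivacy-nixos-config.git"
        "?ref=flakes&dir=sp-modules/monitoring"
    )
# On exit, flake.nix is rewritten with one `inputs.monitoring.url = ...;`
# line per entry in manager.services.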

View file

@@ -0,0 +1,138 @@
"""Class representing Bitwarden service"""

import base64
import subprocess
from typing import List

from selfprivacy_api.utils import ReadUserData, WriteUserData
from selfprivacy_api.utils.systemd import get_service_status
from selfprivacy_api.services.service import Service, ServiceStatus
from selfprivacy_api.services.forgejo.icon import FORGEJO_ICON
from selfprivacy_api.services.config_item import (
    StringServiceConfigItem,
    BoolServiceConfigItem,
    EnumServiceConfigItem,
    ServiceConfigItem,
)
from selfprivacy_api.utils.regex_strings import SUBDOMAIN_REGEX


class Forgejo(Service):
    """Class representing Forgejo service.

    Previously was Gitea, so some IDs are still called gitea for compatibility.
    """

    config_items: dict[str, ServiceConfigItem] = {
        "subdomain": StringServiceConfigItem(
            id="subdomain",
            default_value="git",
            description="Subdomain",
            regex=SUBDOMAIN_REGEX,
            widget="subdomain",
        ),
        "appName": StringServiceConfigItem(
            id="appName",
            default_value="SelfPrivacy git Service",
            description="The name displayed in the web interface",
        ),
        "enableLfs": BoolServiceConfigItem(
            id="enableLfs",
            default_value=True,
            description="Enable Git LFS",
        ),
        "forcePrivate": BoolServiceConfigItem(
            id="forcePrivate",
            default_value=False,
            description="Force all new repositories to be private",
        ),
        "disableRegistration": BoolServiceConfigItem(
            id="disableRegistration",
            default_value=False,
            description="Disable registration of new users",
        ),
        "requireSigninView": BoolServiceConfigItem(
            id="requireSigninView",
            default_value=False,
            description="Force users to log in to view any page",
        ),
        "defaultTheme": EnumServiceConfigItem(
            id="defaultTheme",
            default_value="forgejo-auto",
            description="Default theme",
            options=[
                "forgejo-auto",
                "forgejo-light",
                "forgejo-dark",
                "gitea-auto",
                "gitea-light",
                "gitea-dark",
            ],
        ),
    }

    @staticmethod
    def get_id() -> str:
        """Return service id. For compatibility keep in gitea."""
        return "gitea"

    @staticmethod
    def get_display_name() -> str:
        """Return service display name."""
        return "Forgejo"

    @staticmethod
    def get_description() -> str:
        """Return service description."""
        return "Forgejo is a Git forge."

    @staticmethod
    def get_svg_icon() -> str:
        """Read SVG icon from file and return it as base64 encoded string."""
        return base64.b64encode(FORGEJO_ICON.encode("utf-8")).decode("utf-8")

    @staticmethod
    def is_movable() -> bool:
        return True

    @staticmethod
    def is_required() -> bool:
        return False

    @staticmethod
    def get_backup_description() -> str:
        return "Git repositories, database and user data."

    @staticmethod
    def get_status() -> ServiceStatus:
        """
        Return Gitea status from systemd.
        Use command return code to determine status.

        Return code 0 means service is running.
        Return code 1 or 2 means service is in error stat.
        Return code 3 means service is stopped.
        Return code 4 means service is off.
        """
        return get_service_status("forgejo.service")

    @staticmethod
    def stop():
        subprocess.run(["systemctl", "stop", "forgejo.service"])

    @staticmethod
    def start():
        subprocess.run(["systemctl", "start", "forgejo.service"])

    @staticmethod
    def restart():
        subprocess.run(["systemctl", "restart", "forgejo.service"])

    @staticmethod
    def get_logs():
        return ""

    @staticmethod
    def get_folders() -> List[str]:
        """The data folder is still called gitea for compatibility."""
        return ["/var/lib/gitea"]

View file

(Binary image changed: icon asset, 1.3 KiB before and 1.3 KiB after.)

View file

@@ -1,4 +1,4 @@
-GITEA_ICON = """
+FORGEJO_ICON = """
 <svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
 <path d="M2.60007 10.5899L8.38007 4.79995L10.0701 6.49995C9.83007 7.34995 10.2201 8.27995 11.0001 8.72995V14.2699C10.4001 14.6099 10.0001 15.2599 10.0001 15.9999C10.0001 16.5304 10.2108 17.0391 10.5859 17.4142C10.9609 17.7892 11.4696 17.9999 12.0001 17.9999C12.5305 17.9999 13.0392 17.7892 13.4143 17.4142C13.7894 17.0391 14.0001 16.5304 14.0001 15.9999C14.0001 15.2599 13.6001 14.6099 13.0001 14.2699V9.40995L15.0701 11.4999C15.0001 11.6499 15.0001 11.8199 15.0001 11.9999C15.0001 12.5304 15.2108 13.0391 15.5859 13.4142C15.9609 13.7892 16.4696 13.9999 17.0001 13.9999C17.5305 13.9999 18.0392 13.7892 18.4143 13.4142C18.7894 13.0391 19.0001 12.5304 19.0001 11.9999C19.0001 11.4695 18.7894 10.9608 18.4143 10.5857C18.0392 10.2107 17.5305 9.99995 17.0001 9.99995C16.8201 9.99995 16.6501 9.99995 16.5001 10.0699L13.9301 7.49995C14.1901 6.56995 13.7101 5.54995 12.7801 5.15995C12.3501 4.99995 11.9001 4.95995 11.5001 5.06995L9.80007 3.37995L10.5901 2.59995C11.3701 1.80995 12.6301 1.80995 13.4101 2.59995L21.4001 10.5899C22.1901 11.3699 22.1901 12.6299 21.4001 13.4099L13.4101 21.3999C12.6301 22.1899 11.3701 22.1899 10.5901 21.3999L2.60007 13.4099C1.81007 12.6299 1.81007 11.3699 2.60007 10.5899Z" fill="black"/>
 </svg>

View file

@@ -1,260 +0,0 @@
"""Generic handler for moving services"""
from __future__ import annotations
import subprocess
import time
import pathlib
import shutil

from pydantic import BaseModel

from selfprivacy_api.jobs import Job, JobStatus, Jobs
from selfprivacy_api.utils.huey import huey
from selfprivacy_api.utils.block_devices import BlockDevice
from selfprivacy_api.utils import ReadUserData, WriteUserData
from selfprivacy_api.services.service import Service, ServiceStatus
from selfprivacy_api.services.owned_path import OwnedPath


class FolderMoveNames(BaseModel):
    name: str
    bind_location: str
    owner: str
    group: str

    @staticmethod
    def from_owned_path(path: OwnedPath) -> FolderMoveNames:
        return FolderMoveNames(
            name=FolderMoveNames.get_foldername(path.path),
            bind_location=path.path,
            owner=path.owner,
            group=path.group,
        )

    @staticmethod
    def get_foldername(path: str) -> str:
        return path.split("/")[-1]

    @staticmethod
    def default_foldermoves(service: Service) -> list[FolderMoveNames]:
        return [
            FolderMoveNames.from_owned_path(folder)
            for folder in service.get_owned_folders()
        ]


@huey.task()
def move_service(
    service: Service,
    volume: BlockDevice,
    job: Job,
    folder_names: list[FolderMoveNames],
    userdata_location: str,
):
    """Move a service to another volume."""
    job = Jobs.update(
        job=job,
        status_text="Performing pre-move checks...",
        status=JobStatus.RUNNING,
    )
    service_name = service.get_display_name()
    with ReadUserData() as user_data:
        if not user_data.get("useBinds", False):
            Jobs.update(
                job=job,
                status=JobStatus.ERROR,
                error="Server is not using binds.",
            )
            return
    # Check if we are on the same volume
    old_volume = service.get_drive()
    if old_volume == volume.name:
        Jobs.update(
            job=job,
            status=JobStatus.ERROR,
            error=f"{service_name} is already on this volume.",
        )
        return
    # Check if there is enough space on the new volume
    if int(volume.fsavail) < service.get_storage_usage():
        Jobs.update(
            job=job,
            status=JobStatus.ERROR,
            error="Not enough space on the new volume.",
        )
        return
    # Make sure the volume is mounted
    if not volume.is_root() and f"/volumes/{volume.name}" not in volume.mountpoints:
        Jobs.update(
            job=job,
            status=JobStatus.ERROR,
            error="Volume is not mounted.",
        )
        return
    # Make sure current actual directory exists and if its user and group are correct
    for folder in folder_names:
        if not pathlib.Path(f"/volumes/{old_volume}/{folder.name}").exists():
            Jobs.update(
                job=job,
                status=JobStatus.ERROR,
                error=f"{service_name} is not found.",
            )
            return
        if not pathlib.Path(f"/volumes/{old_volume}/{folder.name}").is_dir():
            Jobs.update(
                job=job,
                status=JobStatus.ERROR,
                error=f"{service_name} is not a directory.",
            )
            return
        if (
            not pathlib.Path(f"/volumes/{old_volume}/{folder.name}").owner()
            == folder.owner
        ):
            Jobs.update(
                job=job,
                status=JobStatus.ERROR,
                error=f"{service_name} owner is not {folder.owner}.",
            )
            return

    # Stop service
    Jobs.update(
        job=job,
        status=JobStatus.RUNNING,
        status_text=f"Stopping {service_name}...",
        progress=5,
    )
    service.stop()
    # Wait for the service to stop, check every second
    # If it does not stop in 30 seconds, abort
    for _ in range(30):
        if service.get_status() not in (
            ServiceStatus.ACTIVATING,
            ServiceStatus.DEACTIVATING,
        ):
            break
        time.sleep(1)
    else:
        Jobs.update(
            job=job,
            status=JobStatus.ERROR,
            error=f"{service_name} did not stop in 30 seconds.",
        )
        return

    # Unmount old volume
    Jobs.update(
        job=job,
        status_text="Unmounting old folder...",
        status=JobStatus.RUNNING,
        progress=10,
    )
    for folder in folder_names:
        try:
            subprocess.run(
                ["umount", folder.bind_location],
                check=True,
            )
        except subprocess.CalledProcessError:
            Jobs.update(
                job=job,
                status=JobStatus.ERROR,
                error="Unable to unmount old volume.",
            )
            return
    # Move data to new volume and set correct permissions
    Jobs.update(
        job=job,
        status_text="Moving data to new volume...",
        status=JobStatus.RUNNING,
        progress=20,
    )
    current_progress = 20
    folder_percentage = 50 // len(folder_names)
    for folder in folder_names:
        shutil.move(
            f"/volumes/{old_volume}/{folder.name}",
            f"/volumes/{volume.name}/{folder.name}",
        )
        Jobs.update(
            job=job,
            status_text="Moving data to new volume...",
            status=JobStatus.RUNNING,
            progress=current_progress + folder_percentage,
        )

    Jobs.update(
        job=job,
        status_text=f"Making sure {service_name} owns its files...",
        status=JobStatus.RUNNING,
        progress=70,
    )
    for folder in folder_names:
        try:
            subprocess.run(
                [
                    "chown",
                    "-R",
                    f"{folder.owner}:{folder.group}",
                    f"/volumes/{volume.name}/{folder.name}",
                ],
                check=True,
            )
        except subprocess.CalledProcessError as error:
            print(error.output)
            Jobs.update(
                job=job,
                status=JobStatus.RUNNING,
                error=f"Unable to set ownership of new volume. {service_name} may not be able to access its files. Continuing anyway.",
            )

    # Mount new volume
    Jobs.update(
        job=job,
        status_text=f"Mounting {service_name} data...",
        status=JobStatus.RUNNING,
        progress=90,
    )
    for folder in folder_names:
        try:
            subprocess.run(
                [
                    "mount",
                    "--bind",
                    f"/volumes/{volume.name}/{folder.name}",
                    folder.bind_location,
                ],
                check=True,
            )
        except subprocess.CalledProcessError as error:
            print(error.output)
            Jobs.update(
                job=job,
                status=JobStatus.ERROR,
                error="Unable to mount new volume.",
            )
            return

    # Update userdata
    Jobs.update(
        job=job,
        status_text="Finishing move...",
        status=JobStatus.RUNNING,
        progress=95,
    )
    with WriteUserData() as user_data:
        if "modules" not in user_data:
            user_data["modules"] = {}
        if userdata_location not in user_data["modules"]:
            user_data["modules"][userdata_location] = {}
        user_data["modules"][userdata_location]["location"] = volume.name
    # Start service
    service.start()
    Jobs.update(
        job=job,
        status=JobStatus.FINISHED,
        result=f"{service_name} moved successfully.",
        status_text=f"Starting {service_name}...",
        progress=100,
    )


@@ -1,4 +1,5 @@
"""Generic size counter using pathlib"""
import pathlib


@@ -1,131 +0,0 @@
"""Class representing Gitea service"""
import base64
import subprocess
import typing

from selfprivacy_api.jobs import Job, Jobs
from selfprivacy_api.services.generic_service_mover import FolderMoveNames, move_service
from selfprivacy_api.services.generic_status_getter import get_service_status
from selfprivacy_api.services.service import Service, ServiceDnsRecord, ServiceStatus
from selfprivacy_api.utils import ReadUserData, WriteUserData, get_domain
from selfprivacy_api.utils.block_devices import BlockDevice
import selfprivacy_api.utils.network as network_utils
from selfprivacy_api.services.gitea.icon import GITEA_ICON


class Gitea(Service):
    """Class representing Gitea service"""

    @staticmethod
    def get_id() -> str:
        """Return service id."""
        return "gitea"

    @staticmethod
    def get_display_name() -> str:
        """Return service display name."""
        return "Gitea"

    @staticmethod
    def get_description() -> str:
        """Return service description."""
        return "Gitea is a Git forge."

    @staticmethod
    def get_svg_icon() -> str:
        """Read SVG icon from file and return it as base64 encoded string."""
        return base64.b64encode(GITEA_ICON.encode("utf-8")).decode("utf-8")

    @staticmethod
    def get_url() -> typing.Optional[str]:
        """Return service url."""
        domain = get_domain()
        return f"https://git.{domain}"

    @staticmethod
    def is_movable() -> bool:
        return True

    @staticmethod
    def is_required() -> bool:
        return False

    @staticmethod
    def get_backup_description() -> str:
        return "Git repositories, database and user data."

    @staticmethod
    def get_status() -> ServiceStatus:
        """
        Return Gitea status from systemd.
        Use command return code to determine status.
        Return code 0 means service is running.
        Return code 1 or 2 means service is in error state.
        Return code 3 means service is stopped.
        Return code 4 means service is off.
        """
        return get_service_status("gitea.service")

    @staticmethod
    def stop():
        subprocess.run(["systemctl", "stop", "gitea.service"])

    @staticmethod
    def start():
        subprocess.run(["systemctl", "start", "gitea.service"])

    @staticmethod
    def restart():
        subprocess.run(["systemctl", "restart", "gitea.service"])

    @staticmethod
    def get_configuration():
        return {}

    @staticmethod
    def set_configuration(config_items):
        return super().set_configuration(config_items)

    @staticmethod
    def get_logs():
        return ""

    @staticmethod
    def get_folders() -> typing.List[str]:
        return ["/var/lib/gitea"]

    @staticmethod
    def get_dns_records() -> typing.List[ServiceDnsRecord]:
        return [
            ServiceDnsRecord(
                type="A",
                name="git",
                content=network_utils.get_ip4(),
                ttl=3600,
                display_name="Gitea",
            ),
            ServiceDnsRecord(
                type="AAAA",
                name="git",
                content=network_utils.get_ip6(),
                ttl=3600,
                display_name="Gitea (IPv6)",
            ),
        ]

    def move_to_volume(self, volume: BlockDevice) -> Job:
        job = Jobs.add(
            type_id="services.gitea.move",
            name="Move Gitea",
            description=f"Moving Gitea data to {volume.name}",
        )
        move_service(
            self,
            volume,
            job,
            FolderMoveNames.default_foldermoves(self),
            "gitea",
        )
        return job


@@ -1,22 +1,41 @@
"""Class representing Jitsi Meet service"""
import base64
import subprocess
-import typing
+from typing import List

from selfprivacy_api.jobs import Job
-from selfprivacy_api.services.generic_status_getter import (
+from selfprivacy_api.utils.systemd import (
    get_service_status_from_several_units,
)
-from selfprivacy_api.services.service import Service, ServiceDnsRecord, ServiceStatus
+from selfprivacy_api.services.service import Service, ServiceStatus
-from selfprivacy_api.utils import ReadUserData, WriteUserData, get_domain
from selfprivacy_api.utils.block_devices import BlockDevice
-import selfprivacy_api.utils.network as network_utils
from selfprivacy_api.services.jitsimeet.icon import JITSI_ICON
+from selfprivacy_api.services.config_item import (
+    StringServiceConfigItem,
+    ServiceConfigItem,
+)
+from selfprivacy_api.utils.regex_strings import SUBDOMAIN_REGEX


class JitsiMeet(Service):
    """Class representing Jitsi service"""

+    config_items: dict[str, ServiceConfigItem] = {
+        "subdomain": StringServiceConfigItem(
+            id="subdomain",
+            default_value="meet",
+            description="Subdomain",
+            regex=SUBDOMAIN_REGEX,
+            widget="subdomain",
+        ),
+        "appName": StringServiceConfigItem(
+            id="appName",
+            default_value="Jitsi Meet",
+            description="The name displayed in the web interface",
+        ),
+    }

    @staticmethod
    def get_id() -> str:
        """Return service id."""
@@ -37,12 +56,6 @@ class JitsiMeet(Service):
        """Read SVG icon from file and return it as base64 encoded string."""
        return base64.b64encode(JITSI_ICON.encode("utf-8")).decode("utf-8")

-    @staticmethod
-    def get_url() -> typing.Optional[str]:
-        """Return service url."""
-        domain = get_domain()
-        return f"https://meet.{domain}"

    @staticmethod
    def is_movable() -> bool:
        return False
@@ -58,69 +71,43 @@ class JitsiMeet(Service):
    @staticmethod
    def get_status() -> ServiceStatus:
        return get_service_status_from_several_units(
-            ["jitsi-videobridge.service", "jicofo.service"]
+            ["prosody.service", "jitsi-videobridge2.service", "jicofo.service"]
        )

    @staticmethod
    def stop():
        subprocess.run(
-            ["systemctl", "stop", "jitsi-videobridge.service"],
+            ["systemctl", "stop", "jitsi-videobridge2.service"],
            check=False,
        )
        subprocess.run(["systemctl", "stop", "jicofo.service"], check=False)
+        subprocess.run(["systemctl", "stop", "prosody.service"], check=False)

    @staticmethod
    def start():
+        subprocess.run(["systemctl", "start", "prosody.service"], check=False)
        subprocess.run(
-            ["systemctl", "start", "jitsi-videobridge.service"],
+            ["systemctl", "start", "jitsi-videobridge2.service"],
            check=False,
        )
        subprocess.run(["systemctl", "start", "jicofo.service"], check=False)

    @staticmethod
    def restart():
+        subprocess.run(["systemctl", "restart", "prosody.service"], check=False)
        subprocess.run(
-            ["systemctl", "restart", "jitsi-videobridge.service"],
+            ["systemctl", "restart", "jitsi-videobridge2.service"],
            check=False,
        )
        subprocess.run(["systemctl", "restart", "jicofo.service"], check=False)

-    @staticmethod
-    def get_configuration():
-        return {}

-    @staticmethod
-    def set_configuration(config_items):
-        return super().set_configuration(config_items)

    @staticmethod
    def get_logs():
        return ""

    @staticmethod
-    def get_folders() -> typing.List[str]:
+    def get_folders() -> List[str]:
        return ["/var/lib/jitsi-meet"]

-    @staticmethod
-    def get_dns_records() -> typing.List[ServiceDnsRecord]:
-        ip4 = network_utils.get_ip4()
-        ip6 = network_utils.get_ip6()
-        return [
-            ServiceDnsRecord(
-                type="A",
-                name="meet",
-                content=ip4,
-                ttl=3600,
-                display_name="Jitsi",
-            ),
-            ServiceDnsRecord(
-                type="AAAA",
-                name="meet",
-                content=ip6,
-                ttl=3600,
-                display_name="Jitsi (IPv6)",
-            ),
-        ]

    def move_to_volume(self, volume: BlockDevice) -> Job:
        raise NotImplementedError("jitsi-meet service is not movable")


@@ -2,17 +2,13 @@
import base64
import subprocess
-import typing
+from typing import Optional, List

-from selfprivacy_api.jobs import Job, Jobs
-from selfprivacy_api.services.generic_service_mover import FolderMoveNames, move_service
-from selfprivacy_api.services.generic_status_getter import (
+from selfprivacy_api.utils.systemd import (
    get_service_status_from_several_units,
)
from selfprivacy_api.services.service import Service, ServiceDnsRecord, ServiceStatus
from selfprivacy_api import utils
-from selfprivacy_api.utils.block_devices import BlockDevice
-import selfprivacy_api.utils.network as network_utils
from selfprivacy_api.services.mailserver.icon import MAILSERVER_ICON
@@ -39,11 +35,15 @@ class MailServer(Service):
    def get_user() -> str:
        return "virtualMail"

-    @staticmethod
-    def get_url() -> typing.Optional[str]:
+    @classmethod
+    def get_url(cls) -> Optional[str]:
        """Return service url."""
        return None

+    @classmethod
+    def get_subdomain(cls) -> Optional[str]:
+        return None

    @staticmethod
    def is_movable() -> bool:
        return True
@@ -89,33 +89,23 @@ class MailServer(Service):
        subprocess.run(["systemctl", "restart", "dovecot2.service"], check=False)
        subprocess.run(["systemctl", "restart", "postfix.service"], check=False)

-    @staticmethod
-    def get_configuration():
-        return {}

-    @staticmethod
-    def set_configuration(config_items):
-        return super().set_configuration(config_items)

    @staticmethod
    def get_logs():
        return ""

    @staticmethod
-    def get_folders() -> typing.List[str]:
+    def get_folders() -> List[str]:
        return ["/var/vmail", "/var/sieve"]

-    @staticmethod
-    def get_dns_records() -> typing.List[ServiceDnsRecord]:
+    @classmethod
+    def get_dns_records(cls, ip4: str, ip6: Optional[str]) -> List[ServiceDnsRecord]:
        domain = utils.get_domain()
        dkim_record = utils.get_dkim_key(domain)
-        ip4 = network_utils.get_ip4()
-        ip6 = network_utils.get_ip6()

        if dkim_record is None:
            return []

-        return [
+        dns_records = [
            ServiceDnsRecord(
                type="A",
                name=domain,
@@ -123,13 +113,6 @@ class MailServer(Service):
                ttl=3600,
                display_name="Root Domain",
            ),
-            ServiceDnsRecord(
-                type="AAAA",
-                name=domain,
-                content=ip6,
-                ttl=3600,
-                display_name="Root Domain (IPv6)",
-            ),
            ServiceDnsRecord(
                type="MX",
                name=domain,
@@ -161,19 +144,14 @@ class MailServer(Service):
            ),
        ]

-    def move_to_volume(self, volume: BlockDevice) -> Job:
-        job = Jobs.add(
-            type_id="services.email.move",
-            name="Move Mail Server",
-            description=f"Moving mailserver data to {volume.name}",
-        )
-        move_service(
-            self,
-            volume,
-            job,
-            FolderMoveNames.default_foldermoves(self),
-            "simple-nixos-mailserver",
-        )
-        return job
+        if ip6 is not None:
+            dns_records.append(
+                ServiceDnsRecord(
+                    type="AAAA",
+                    name=domain,
+                    content=ip6,
+                    ttl=3600,
+                    display_name="Root Domain (IPv6)",
+                )
+            )
+        return dns_records


@@ -0,0 +1,72 @@
"""Generic handler for moving services"""
from __future__ import annotations
import shutil
from typing import List

from selfprivacy_api.jobs import Job, report_progress
from selfprivacy_api.utils.block_devices import BlockDevice
from selfprivacy_api.services.owned_path import Bind


class MoveError(Exception):
    """Move of the data has failed"""


def check_volume(volume: BlockDevice, space_needed: int) -> None:
    # Check if there is enough space on the new volume
    if int(volume.fsavail) < space_needed:
        raise MoveError("Not enough space on the new volume.")

    # Make sure the volume is mounted
    if not volume.is_root() and f"/volumes/{volume.name}" not in volume.mountpoints:
        raise MoveError("Volume is not mounted.")


def check_binds(volume_name: str, binds: List[Bind]) -> None:
    # Make sure current actual directory exists and if its user and group are correct
    for bind in binds:
        bind.validate()


def unbind_folders(owned_folders: List[Bind]) -> None:
    for folder in owned_folders:
        folder.unbind()


# May be moved into Bind
def move_data_to_volume(
    binds: List[Bind],
    new_volume: BlockDevice,
    job: Job,
) -> List[Bind]:
    current_progress = job.progress
    if current_progress is None:
        current_progress = 0

    progress_per_folder = 50 // len(binds)
    for bind in binds:
        old_location = bind.location_at_volume()
        bind.drive = new_volume
        new_location = bind.location_at_volume()

        try:
            shutil.move(old_location, new_location)
        except Exception as error:
            raise MoveError(
                f"could not move {old_location} to {new_location} : {str(error)}"
            ) from error

        progress = current_progress + progress_per_folder
        report_progress(progress, job, "Moving data to new volume...")
    return binds


def ensure_folder_ownership(folders: List[Bind]) -> None:
    for folder in folders:
        folder.ensure_ownership()


def bind_folders(folders: List[Bind]):
    for folder in folders:
        folder.bind()
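
Taken together, these helpers form the unmount → move → chown → remount pipeline that Service.do_move_to_volume drives later in this changeset. A minimal sketch of the composition, assuming a service object exposing binds() and get_storage_usage() as defined elsewhere in this changeset (the function name move_sketch is illustrative only):

# Illustrative composition of the helpers above; `service`, `new_volume`
# and `job` are assumed to come from the caller.
def move_sketch(service, new_volume, job):
    binds = service.binds()
    check_volume(new_volume, space_needed=service.get_storage_usage())
    unbind_folders(binds)                      # detach the bind mounts
    binds = move_data_to_volume(binds, new_volume, job)
    ensure_folder_ownership(binds)             # chown -R at the new location
    bind_folders(binds)                        # re-attach the bind mounts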


@@ -1,20 +1,33 @@
"""Class representing Nextcloud service."""
import base64
import subprocess
-import typing
+from typing import List

-from selfprivacy_api.jobs import Job, Jobs
-from selfprivacy_api.services.generic_service_mover import FolderMoveNames, move_service
-from selfprivacy_api.services.generic_status_getter import get_service_status
-from selfprivacy_api.services.service import Service, ServiceDnsRecord, ServiceStatus
+from selfprivacy_api.utils.systemd import get_service_status
+from selfprivacy_api.services.service import Service, ServiceStatus
-from selfprivacy_api.utils import ReadUserData, WriteUserData, get_domain
-from selfprivacy_api.utils.block_devices import BlockDevice
-import selfprivacy_api.utils.network as network_utils
from selfprivacy_api.services.nextcloud.icon import NEXTCLOUD_ICON
+from selfprivacy_api.services.config_item import (
+    StringServiceConfigItem,
+    ServiceConfigItem,
+)
+from selfprivacy_api.utils.regex_strings import SUBDOMAIN_REGEX


class Nextcloud(Service):
    """Class representing Nextcloud service."""

+    config_items: dict[str, ServiceConfigItem] = {
+        "subdomain": StringServiceConfigItem(
+            id="subdomain",
+            default_value="cloud",
+            description="Subdomain",
+            regex=SUBDOMAIN_REGEX,
+            widget="subdomain",
+        ),
+    }

    @staticmethod
    def get_id() -> str:
        """Return service id."""
@@ -35,12 +48,6 @@ class Nextcloud(Service):
        """Read SVG icon from file and return it as base64 encoded string."""
        return base64.b64encode(NEXTCLOUD_ICON.encode("utf-8")).decode("utf-8")

-    @staticmethod
-    def get_url() -> typing.Optional[str]:
-        """Return service url."""
-        domain = get_domain()
-        return f"https://cloud.{domain}"

    @staticmethod
    def is_movable() -> bool:
        return True
@@ -81,54 +88,11 @@ class Nextcloud(Service):
        """Restart Nextcloud service."""
        subprocess.Popen(["systemctl", "restart", "phpfpm-nextcloud.service"])

-    @staticmethod
-    def get_configuration() -> dict:
-        """Return Nextcloud configuration."""
-        return {}

-    @staticmethod
-    def set_configuration(config_items):
-        return super().set_configuration(config_items)

    @staticmethod
    def get_logs():
        """Return Nextcloud logs."""
        return ""

    @staticmethod
-    def get_folders() -> typing.List[str]:
+    def get_folders() -> List[str]:
        return ["/var/lib/nextcloud"]

-    @staticmethod
-    def get_dns_records() -> typing.List[ServiceDnsRecord]:
-        return [
-            ServiceDnsRecord(
-                type="A",
-                name="cloud",
-                content=network_utils.get_ip4(),
-                ttl=3600,
-                display_name="Nextcloud",
-            ),
-            ServiceDnsRecord(
-                type="AAAA",
-                name="cloud",
-                content=network_utils.get_ip6(),
-                ttl=3600,
-                display_name="Nextcloud (IPv6)",
-            ),
-        ]

-    def move_to_volume(self, volume: BlockDevice) -> Job:
-        job = Jobs.add(
-            type_id="services.nextcloud.move",
-            name="Move Nextcloud",
-            description=f"Moving Nextcloud to volume {volume.name}",
-        )
-        move_service(
-            self,
-            volume,
-            job,
-            FolderMoveNames.default_foldermoves(self),
-            "nextcloud",
-        )
-        return job


@@ -1,14 +1,13 @@
"""Class representing ocserv service."""
import base64
import subprocess
import typing

from selfprivacy_api.jobs import Job
-from selfprivacy_api.services.generic_status_getter import get_service_status
+from selfprivacy_api.utils.systemd import get_service_status
-from selfprivacy_api.services.service import Service, ServiceDnsRecord, ServiceStatus
+from selfprivacy_api.services.service import Service, ServiceStatus
-from selfprivacy_api.utils import ReadUserData, WriteUserData
from selfprivacy_api.utils.block_devices import BlockDevice
from selfprivacy_api.services.ocserv.icon import OCSERV_ICON
-import selfprivacy_api.utils.network as network_utils


class Ocserv(Service):
@@ -30,8 +29,8 @@ class Ocserv(Service):
    def get_svg_icon() -> str:
        return base64.b64encode(OCSERV_ICON.encode("utf-8")).decode("utf-8")

-    @staticmethod
-    def get_url() -> typing.Optional[str]:
+    @classmethod
+    def get_url(cls) -> typing.Optional[str]:
        """Return service url."""
        return None
@@ -67,37 +66,18 @@ class Ocserv(Service):
    def restart():
        subprocess.run(["systemctl", "restart", "ocserv.service"], check=False)

-    @staticmethod
-    def get_configuration():
+    @classmethod
+    def get_configuration(cls):
        return {}

-    @staticmethod
-    def set_configuration(config_items):
+    @classmethod
+    def set_configuration(cls, config_items):
        return super().set_configuration(config_items)

    @staticmethod
    def get_logs():
        return ""

-    @staticmethod
-    def get_dns_records() -> typing.List[ServiceDnsRecord]:
-        return [
-            ServiceDnsRecord(
-                type="A",
-                name="vpn",
-                content=network_utils.get_ip4(),
-                ttl=3600,
-                display_name="OpenConnect VPN",
-            ),
-            ServiceDnsRecord(
-                type="AAAA",
-                name="vpn",
-                content=network_utils.get_ip6(),
-                ttl=3600,
-                display_name="OpenConnect VPN (IPv6)",
-            ),
-        ]

    @staticmethod
    def get_folders() -> typing.List[str]:
        return []


@@ -1,7 +1,126 @@
+from __future__ import annotations
+import subprocess
+import pathlib
from pydantic import BaseModel
+from os.path import exists

+from selfprivacy_api.utils.block_devices import BlockDevice, BlockDevices


+# tests override it to a tmpdir
+VOLUMES_PATH = "/volumes"


+class BindError(Exception):
+    pass


class OwnedPath(BaseModel):
+    """
+    A convenient interface for explicitly defining ownership of service folders.
+    One overrides Service.get_owned_paths() for this.

+    Why this exists?:
+    One could use Bind to define ownership but then one would need to handle drive which
+    is unnecessary and produces code duplication.
+    It is also somewhat semantically wrong to include Owned Path into Bind
+    instead of user and group. Because owner and group in Bind are applied to
+    the original folder on the drive, not to the binding path. But maybe it is
+    ok since they are technically both owned. Idk yet.
+    """

    path: str
    owner: str
    group: str


+class Bind:
+    """
+    A directory that resides on some volume but we mount it into fs where we need it.
+    Used for storing service data.
+    """

+    def __init__(self, binding_path: str, owner: str, group: str, drive: BlockDevice):
+        self.binding_path = binding_path
+        self.owner = owner
+        self.group = group
+        self.drive = drive

+    # TODO: delete owned path interface from Service
+    @staticmethod
+    def from_owned_path(path: OwnedPath, drive_name: str) -> Bind:
+        drive = BlockDevices().get_block_device(drive_name)
+        if drive is None:
+            raise BindError(f"No such drive: {drive_name}")
+        return Bind(
+            binding_path=path.path, owner=path.owner, group=path.group, drive=drive
+        )

+    def bind_foldername(self) -> str:
+        return self.binding_path.split("/")[-1]

+    def location_at_volume(self) -> str:
+        return f"{VOLUMES_PATH}/{self.drive.name}/{self.bind_foldername()}"

+    def validate(self) -> None:
+        path = pathlib.Path(self.location_at_volume())

+        if not path.exists():
+            raise BindError(f"directory {path} is not found.")
+        if not path.is_dir():
+            raise BindError(f"{path} is not a directory.")
+        if path.owner() != self.owner:
+            raise BindError(f"{path} is not owned by {self.owner}.")

+    def bind(self) -> None:
+        if not exists(self.binding_path):
+            raise BindError(f"cannot bind to a non-existing path: {self.binding_path}")

+        source = self.location_at_volume()
+        target = self.binding_path

+        try:
+            subprocess.run(
+                ["mount", "--bind", source, target],
+                stderr=subprocess.PIPE,
+                check=True,
+            )
+        except subprocess.CalledProcessError as error:
+            print(error.stderr)
+            raise BindError(f"Unable to bind {source} to {target} :{error.stderr}")

+    def unbind(self) -> None:
+        if not exists(self.binding_path):
+            raise BindError(f"cannot unbind a non-existing path: {self.binding_path}")

+        try:
+            subprocess.run(
+                # umount -l ?
+                ["umount", self.binding_path],
+                check=True,
+            )
+        except subprocess.CalledProcessError:
+            raise BindError(f"Unable to unmount folder {self.binding_path}.")

+    def ensure_ownership(self) -> None:
+        true_location = self.location_at_volume()
+        try:
+            subprocess.run(
+                [
+                    "chown",
+                    "-R",
+                    f"{self.owner}:{self.group}",
+                    # Could we just chown the binded location instead?
+                    true_location,
+                ],
+                check=True,
+                stderr=subprocess.PIPE,
+            )
+        except subprocess.CalledProcessError as error:
+            print(error.stderr)
+            error_message = (
+                f"Unable to set ownership of {true_location} :{error.stderr}"
+            )
+            raise BindError(error_message)
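
For orientation, a sketch of the intended Bind lifecycle; the drive name and folder here are made up for illustration:

# Illustrative only: wrap a service folder and re-mount it.
folder = OwnedPath(path="/var/lib/example", owner="example", group="example")
bind = Bind.from_owned_path(folder, drive_name="sda1")  # BindError if the drive is unknown

bind.validate()          # folder must exist on the volume with the right owner
bind.unbind()            # detach the current bind mount
bind.ensure_ownership()  # chown -R owner:group at the volume location
bind.bind()              # mount --bind the volume location onto binding_path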


@@ -1,15 +1,13 @@
"""Class representing Pleroma service."""
import base64
import subprocess
-import typing
+from typing import List

-from selfprivacy_api.jobs import Job, Jobs
-from selfprivacy_api.services.generic_service_mover import FolderMoveNames, move_service
-from selfprivacy_api.services.generic_status_getter import get_service_status
-from selfprivacy_api.services.service import Service, ServiceDnsRecord, ServiceStatus
from selfprivacy_api.services.owned_path import OwnedPath
-from selfprivacy_api.utils import ReadUserData, WriteUserData, get_domain
+from selfprivacy_api.utils.systemd import get_service_status
-from selfprivacy_api.utils.block_devices import BlockDevice
+from selfprivacy_api.services.service import Service, ServiceStatus
-import selfprivacy_api.utils.network as network_utils
from selfprivacy_api.services.pleroma.icon import PLEROMA_ICON
@@ -32,12 +30,6 @@ class Pleroma(Service):
    def get_svg_icon() -> str:
        return base64.b64encode(PLEROMA_ICON.encode("utf-8")).decode("utf-8")

-    @staticmethod
-    def get_url() -> typing.Optional[str]:
-        """Return service url."""
-        domain = get_domain()
-        return f"https://social.{domain}"

    @staticmethod
    def is_movable() -> bool:
        return True
@@ -69,12 +61,12 @@ class Pleroma(Service):
        subprocess.run(["systemctl", "restart", "pleroma.service"])
        subprocess.run(["systemctl", "restart", "postgresql.service"])

-    @staticmethod
-    def get_configuration(config_items):
+    @classmethod
+    def get_configuration(cls):
        return {}

-    @staticmethod
-    def set_configuration(config_items):
+    @classmethod
+    def set_configuration(cls, config_items):
        return super().set_configuration(config_items)

    @staticmethod
@@ -82,10 +74,10 @@ class Pleroma(Service):
        return ""

    @staticmethod
-    def get_owned_folders() -> typing.List[OwnedPath]:
+    def get_owned_folders() -> List[OwnedPath]:
        """
        Get a list of occupied directories with ownership info
-        pleroma has folders that are owned by different users
+        Pleroma has folders that are owned by different users
        """
        return [
            OwnedPath(
@@ -99,37 +91,3 @@ class Pleroma(Service):
                group="postgres",
            ),
        ]

-    @staticmethod
-    def get_dns_records() -> typing.List[ServiceDnsRecord]:
-        return [
-            ServiceDnsRecord(
-                type="A",
-                name="social",
-                content=network_utils.get_ip4(),
-                ttl=3600,
-                display_name="Pleroma",
-            ),
-            ServiceDnsRecord(
-                type="AAAA",
-                name="social",
-                content=network_utils.get_ip6(),
-                ttl=3600,
-                display_name="Pleroma (IPv6)",
-            ),
-        ]

-    def move_to_volume(self, volume: BlockDevice) -> Job:
-        job = Jobs.add(
-            type_id="services.pleroma.move",
-            name="Move Pleroma",
-            description=f"Moving Pleroma to volume {volume.name}",
-        )
-        move_service(
-            self,
-            volume,
-            job,
-            FolderMoveNames.default_foldermoves(self),
-            "pleroma",
-        )
-        return job


@@ -0,0 +1,86 @@
"""Class representing Prometheus service."""
import base64
import subprocess
from typing import Optional, List

from selfprivacy_api.services.owned_path import OwnedPath
from selfprivacy_api.utils.systemd import get_service_status
from selfprivacy_api.services.service import Service, ServiceStatus

from selfprivacy_api.services.prometheus.icon import PROMETHEUS_ICON


class Prometheus(Service):
    """Class representing Prometheus service."""

    @staticmethod
    def get_id() -> str:
        return "monitoring"

    @staticmethod
    def get_display_name() -> str:
        return "Prometheus"

    @staticmethod
    def get_description() -> str:
        return "Prometheus is used for resource monitoring and alerts."

    @staticmethod
    def get_svg_icon() -> str:
        return base64.b64encode(PROMETHEUS_ICON.encode("utf-8")).decode("utf-8")

    @staticmethod
    def get_url() -> Optional[str]:
        """Return service url."""
        return None

    @staticmethod
    def get_subdomain() -> Optional[str]:
        return None

    @staticmethod
    def is_movable() -> bool:
        return False

    @staticmethod
    def is_required() -> bool:
        return True

    @staticmethod
    def can_be_backed_up() -> bool:
        return False

    @staticmethod
    def get_backup_description() -> str:
        return "Backups are not available for Prometheus."

    @staticmethod
    def get_status() -> ServiceStatus:
        return get_service_status("prometheus.service")

    @staticmethod
    def stop():
        subprocess.run(["systemctl", "stop", "prometheus.service"])

    @staticmethod
    def start():
        subprocess.run(["systemctl", "start", "prometheus.service"])

    @staticmethod
    def restart():
        subprocess.run(["systemctl", "restart", "prometheus.service"])

    @staticmethod
    def get_logs():
        return ""

    @staticmethod
    def get_owned_folders() -> List[OwnedPath]:
        return [
            OwnedPath(
                path="/var/lib/prometheus",
                owner="prometheus",
                group="prometheus",
            ),
        ]


@@ -0,0 +1,5 @@
PROMETHEUS_ICON = """
<svg width="128" height="128" viewBox="0 0 128 128" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M64.125 0.51C99.229 0.517 128.045 29.133 128 63.951C127.955 99.293 99.258 127.515 63.392 127.49C28.325 127.466 -0.0249987 98.818 1.26289e-06 63.434C0.0230013 28.834 28.898 0.503 64.125 0.51ZM44.72 22.793C45.523 26.753 44.745 30.448 43.553 34.082C42.73 36.597 41.591 39.022 40.911 41.574C39.789 45.777 38.52 50.004 38.052 54.3C37.381 60.481 39.81 65.925 43.966 71.34L24.86 67.318C24.893 67.92 24.86 68.148 24.925 68.342C26.736 73.662 29.923 78.144 33.495 82.372C33.872 82.818 34.732 83.046 35.372 83.046C54.422 83.084 73.473 83.08 92.524 83.055C93.114 83.055 93.905 82.945 94.265 82.565C98.349 78.271 101.47 73.38 103.425 67.223L83.197 71.185C84.533 68.567 86.052 66.269 86.93 63.742C89.924 55.099 88.682 46.744 84.385 38.862C80.936 32.538 77.754 26.242 79.475 18.619C75.833 22.219 74.432 26.798 73.543 31.517C72.671 36.167 72.154 40.881 71.478 45.6C71.38 45.457 71.258 45.35 71.236 45.227C71.1507 44.7338 71.0919 44.2365 71.06 43.737C70.647 36.011 69.14 28.567 65.954 21.457C64.081 17.275 62.013 12.995 63.946 8.001C62.639 8.694 61.456 9.378 60.608 10.357C58.081 13.277 57.035 16.785 56.766 20.626C56.535 23.908 56.22 27.205 55.61 30.432C54.97 33.824 53.96 37.146 51.678 40.263C50.76 33.607 50.658 27.019 44.722 22.793H44.72ZM93.842 88.88H34.088V99.26H93.842V88.88ZM45.938 104.626C45.889 113.268 54.691 119.707 65.571 119.24C74.591 118.851 82.57 111.756 81.886 104.626H45.938Z" fill="black"/>
</svg>
"""


@@ -0,0 +1,104 @@
"""Class representing Roundcube service"""
import base64
import subprocess
from typing import List

from selfprivacy_api.jobs import Job
from selfprivacy_api.utils.systemd import (
    get_service_status_from_several_units,
)
from selfprivacy_api.services.service import Service, ServiceStatus
from selfprivacy_api.utils.block_devices import BlockDevice
from selfprivacy_api.services.roundcube.icon import ROUNDCUBE_ICON
from selfprivacy_api.services.config_item import (
    StringServiceConfigItem,
    ServiceConfigItem,
)
from selfprivacy_api.utils.regex_strings import SUBDOMAIN_REGEX


class Roundcube(Service):
    """Class representing Roundcube service"""

    config_items: dict[str, ServiceConfigItem] = {
        "subdomain": StringServiceConfigItem(
            id="subdomain",
            default_value="roundcube",
            description="Subdomain",
            regex=SUBDOMAIN_REGEX,
            widget="subdomain",
        ),
    }

    @staticmethod
    def get_id() -> str:
        """Return service id."""
        return "roundcube"

    @staticmethod
    def get_display_name() -> str:
        """Return service display name."""
        return "Roundcube"

    @staticmethod
    def get_description() -> str:
        """Return service description."""
        return "Roundcube is an open source webmail software."

    @staticmethod
    def get_svg_icon() -> str:
        """Read SVG icon from file and return it as base64 encoded string."""
        return base64.b64encode(ROUNDCUBE_ICON.encode("utf-8")).decode("utf-8")

    @staticmethod
    def is_movable() -> bool:
        return False

    @staticmethod
    def is_required() -> bool:
        return False

    @staticmethod
    def can_be_backed_up() -> bool:
        return False

    @staticmethod
    def get_backup_description() -> str:
        return "Nothing to backup."

    @staticmethod
    def get_status() -> ServiceStatus:
        return get_service_status_from_several_units(["phpfpm-roundcube.service"])

    @staticmethod
    def stop():
        subprocess.run(
            ["systemctl", "stop", "phpfpm-roundcube.service"],
            check=False,
        )

    @staticmethod
    def start():
        subprocess.run(
            ["systemctl", "start", "phpfpm-roundcube.service"],
            check=False,
        )

    @staticmethod
    def restart():
        subprocess.run(
            ["systemctl", "restart", "phpfpm-roundcube.service"],
            check=False,
        )

    @staticmethod
    def get_logs():
        return ""

    @staticmethod
    def get_folders() -> List[str]:
        return []

    def move_to_volume(self, volume: BlockDevice) -> Job:
        raise NotImplementedError("roundcube service is not movable")


@@ -0,0 +1,7 @@
ROUNDCUBE_ICON = """
<svg fill="none" version="1.1" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg">
<g transform="translate(29.07 -.3244)">
<path d="m-17.02 2.705c-4.01 2e-7 -7.283 3.273-7.283 7.283 0 0.00524-1.1e-5 0.01038 0 0.01562l-1.85 1.068v5.613l9.105 5.26 9.104-5.26v-5.613l-1.797-1.037c1.008e-4 -0.01573 0.00195-0.03112 0.00195-0.04688-1e-7 -4.01-3.271-7.283-7.281-7.283zm0 2.012c2.923 1e-7 5.27 2.349 5.27 5.271 0 2.923-2.347 5.27-5.27 5.27-2.923-1e-6 -5.271-2.347-5.271-5.27 0-2.923 2.349-5.271 5.271-5.271z" fill="#000" fill-rule="evenodd" stroke-linejoin="bevel"/>
</g>
</svg>
"""


@@ -1,49 +1,44 @@
"""Abstract class for a service running on a server"""
from abc import ABC, abstractmethod
-from enum import Enum
-import typing
-from pydantic import BaseModel
-from selfprivacy_api.jobs import Job
+from typing import List, Optional
+from os.path import exists

+from selfprivacy_api import utils
+from selfprivacy_api.services.config_item import ServiceConfigItem
+from selfprivacy_api.utils.default_subdomains import DEFAULT_SUBDOMAINS
+from selfprivacy_api.utils import ReadUserData, WriteUserData, get_domain
+from selfprivacy_api.utils.waitloop import wait_until_true
from selfprivacy_api.utils.block_devices import BlockDevice, BlockDevices
+from selfprivacy_api.jobs import Job, Jobs, JobStatus, report_progress
+from selfprivacy_api.jobs.upgrade_system import rebuild_system
+from selfprivacy_api.models.services import ServiceStatus, ServiceDnsRecord
from selfprivacy_api.services.generic_size_counter import get_storage_usage
-from selfprivacy_api.services.owned_path import OwnedPath
+from selfprivacy_api.services.owned_path import OwnedPath, Bind
-from selfprivacy_api import utils
-from selfprivacy_api.utils.waitloop import wait_until_true
-from selfprivacy_api.utils import ReadUserData, WriteUserData, get_domain
+from selfprivacy_api.services.moving import (
+    check_binds,
+    check_volume,
+    unbind_folders,
+    bind_folders,
+    ensure_folder_ownership,
+    MoveError,
+    move_data_to_volume,
+)

DEFAULT_START_STOP_TIMEOUT = 5 * 60


-class ServiceStatus(Enum):
-    """Enum for service status"""
-    ACTIVE = "ACTIVE"
-    RELOADING = "RELOADING"
-    INACTIVE = "INACTIVE"
-    FAILED = "FAILED"
-    ACTIVATING = "ACTIVATING"
-    DEACTIVATING = "DEACTIVATING"
-    OFF = "OFF"


-class ServiceDnsRecord(BaseModel):
-    type: str
-    name: str
-    content: str
-    ttl: int
-    display_name: str
-    priority: typing.Optional[int] = None


class Service(ABC):
    """
    Service here is some software that is hosted on the server and
    can be installed, configured and used by a user.
    """

+    config_items: dict[str, "ServiceConfigItem"] = {}

    @staticmethod
    @abstractmethod
    def get_id() -> str:
@@ -76,16 +71,31 @@ class Service(ABC):
        """
        pass

-    @staticmethod
-    @abstractmethod
-    def get_url() -> typing.Optional[str]:
+    @classmethod
+    def get_url(cls) -> Optional[str]:
        """
        The url of the service if it is accessible from the internet browser.
        """
-        pass
+        domain = get_domain()
+        subdomain = cls.get_subdomain()
+        return f"https://{subdomain}.{domain}"

    @classmethod
-    def get_user(cls) -> typing.Optional[str]:
+    def get_subdomain(cls) -> Optional[str]:
+        """
+        The assigned primary subdomain for this service.
+        """
+        name = cls.get_id()
+        with ReadUserData() as user_data:
+            if "modules" in user_data:
+                if name in user_data["modules"]:
+                    if "subdomain" in user_data["modules"][name]:
+                        return user_data["modules"][name]["subdomain"]
+        return DEFAULT_SUBDOMAINS.get(name)

+    @classmethod
+    def get_user(cls) -> Optional[str]:
        """
        The user that owns the service's files.
        Defaults to the service's id.
@@ -93,13 +103,18 @@ class Service(ABC):
        return cls.get_id()

    @classmethod
-    def get_group(cls) -> typing.Optional[str]:
+    def get_group(cls) -> Optional[str]:
        """
        The group that owns the service's files.
        Defaults to the service's user.
        """
        return cls.get_user()

+    @staticmethod
+    def is_always_active() -> bool:
+        """`True` if the service cannot be stopped, which is true for api itself"""
+        return False

    @staticmethod
    @abstractmethod
    def is_movable() -> bool:
@@ -138,6 +153,16 @@ class Service(ABC):
        with ReadUserData() as user_data:
            return user_data.get("modules", {}).get(name, {}).get("enable", False)

+    @classmethod
+    def is_installed(cls) -> bool:
+        """
+        `True` if the service is installed.
+        `False` if there is no module data in user data
+        """
+        name = cls.get_id()
+        with ReadUserData() as user_data:
+            return user_data.get("modules", {}).get(name, {}) != {}

    @staticmethod
    @abstractmethod
    def get_status() -> ServiceStatus:
@@ -182,15 +207,24 @@ class Service(ABC):
        """Restart the service. Usually this means restarting systemd unit."""
        pass

-    @staticmethod
-    @abstractmethod
-    def get_configuration():
-        pass
+    @classmethod
+    def get_configuration(cls):
+        return {
+            key: cls.config_items[key].as_dict(cls.get_id()) for key in cls.config_items
+        }

-    @staticmethod
-    @abstractmethod
-    def set_configuration(config_items):
-        pass
+    @classmethod
+    def set_configuration(cls, config_items):
+        for key, value in config_items.items():
+            if key not in cls.config_items:
+                raise ValueError(f"Key {key} is not valid for {cls.get_id()}")
+            if cls.config_items[key].validate_value(value) is False:
+                raise ValueError(f"Value {value} is not valid for {key}")
+        for key, value in config_items.items():
+            cls.config_items[key].set_value(
+                value,
+                cls.get_id(),
+            )

    @staticmethod
    @abstractmethod
@@ -209,10 +243,42 @@ class Service(ABC):
            storage_used += get_storage_usage(folder)
        return storage_used

-    @staticmethod
-    @abstractmethod
-    def get_dns_records() -> typing.List[ServiceDnsRecord]:
-        pass
+    @classmethod
+    def has_folders(cls) -> bool:
+        """
+        If there are no folders on disk, moving is noop
+        """
+        for folder in cls.get_folders():
+            if exists(folder):
+                return True
+        return False

+    @classmethod
+    def get_dns_records(cls, ip4: str, ip6: Optional[str]) -> List[ServiceDnsRecord]:
+        subdomain = cls.get_subdomain()
+        display_name = cls.get_display_name()
+        if subdomain is None:
+            return []
+        dns_records = [
+            ServiceDnsRecord(
+                type="A",
+                name=subdomain,
+                content=ip4,
+                ttl=3600,
+                display_name=display_name,
+            )
+        ]
+        if ip6 is not None:
+            dns_records.append(
+                ServiceDnsRecord(
+                    type="AAAA",
+                    name=subdomain,
+                    content=ip6,
+                    ttl=3600,
+                    display_name=f"{display_name} (IPv6)",
+                )
+            )
+        return dns_records

    @classmethod
    def get_drive(cls) -> str:
@@ -237,7 +303,7 @@ class Service(ABC):
        return root_device

    @classmethod
-    def get_folders(cls) -> typing.List[str]:
+    def get_folders(cls) -> List[str]:
        """
        get a plain list of occupied directories
        Default extracts info from overriden get_owned_folders()
@@ -249,7 +315,7 @@ class Service(ABC):
        return [owned_folder.path for owned_folder in cls.get_owned_folders()]

    @classmethod
-    def get_owned_folders(cls) -> typing.List[OwnedPath]:
+    def get_owned_folders(cls) -> List[OwnedPath]:
        """
        Get a list of occupied directories with ownership info
        Default extracts info from overriden get_folders()
@@ -264,19 +330,151 @@ class Service(ABC):
    def get_foldername(path: str) -> str:
        return path.split("/")[-1]

-    @abstractmethod
-    def move_to_volume(self, volume: BlockDevice) -> Job:
-        """Cannot raise errors.
-        Returns errors as an errored out Job instead."""
-        pass

+    # TODO: with better json utils, it can be one line, and not a separate function
+    @classmethod
+    def set_location(cls, volume: BlockDevice):
+        """
+        Only changes userdata
+        """
+        service_id = cls.get_id()
+        with WriteUserData() as user_data:
+            if "modules" not in user_data:
+                user_data["modules"] = {}
+            if service_id not in user_data["modules"]:
+                user_data["modules"][service_id] = {}
+            user_data["modules"][service_id]["location"] = volume.name

+    def binds(self) -> List[Bind]:
+        owned_folders = self.get_owned_folders()
+        return [
+            Bind.from_owned_path(folder, self.get_drive()) for folder in owned_folders
+        ]

+    def assert_can_move(self, new_volume):
+        """
+        Checks if the service can be moved to new volume
+        Raises errors if it cannot
+        """
+        service_name = self.get_display_name()
+        if not self.is_movable():
+            raise MoveError(f"{service_name} is not movable")

+        with ReadUserData() as user_data:
+            if not user_data.get("useBinds", False):
+                raise MoveError("Server is not using binds.")

+        current_volume_name = self.get_drive()
+        if current_volume_name == new_volume.name:
+            raise MoveError(f"{service_name} is already on volume {new_volume}")

+        check_volume(new_volume, space_needed=self.get_storage_usage())

+        binds = self.binds()
+        if binds == []:
+            raise MoveError("nothing to move")

+        # It is ok if service is uninitialized, we will just reregister it
+        if self.has_folders():
+            check_binds(current_volume_name, binds)

+    def do_move_to_volume(
+        self,
+        new_volume: BlockDevice,
+        job: Job,
+    ):
+        """
+        Move a service to another volume.
+        Note: It may be much simpler to write it per bind, but a bit less safe?
+        """
+        service_name = self.get_display_name()
+        binds = self.binds()

+        report_progress(10, job, "Unmounting folders from old volume...")
+        unbind_folders(binds)

+        report_progress(20, job, "Moving data to new volume...")
+        binds = move_data_to_volume(binds, new_volume, job)

+        report_progress(70, job, f"Making sure {service_name} owns its files...")
+        try:
+            ensure_folder_ownership(binds)
+        except Exception as error:
+            # We have logged it via print and we additionally log it here in the error field
+            # We are continuing anyway but Job has no warning field
+            Jobs.update(
+                job,
+                JobStatus.RUNNING,
+                error=f"Service {service_name} will not be able to write files: "
+                + str(error),
+            )

+        report_progress(90, job, f"Mounting {service_name} data...")
+        bind_folders(binds)

+        report_progress(95, job, f"Finishing moving {service_name}...")
+        self.set_location(new_volume)

+    def move_to_volume(self, volume: BlockDevice, job: Job) -> Job:
+        service_name = self.get_display_name()

+        report_progress(0, job, "Performing pre-move checks...")
+        self.assert_can_move(volume)
+        if not self.has_folders():
+            self.set_location(volume)
+            Jobs.update(
+                job=job,
+                status=JobStatus.FINISHED,
+                result=f"{service_name} moved successfully (no folders).",
+                status_text=f"NOT starting {service_name}",
+                progress=100,
+            )
+            return job

+        report_progress(5, job, f"Stopping {service_name}...")
+        assert self is not None
+        with StoppedService(self):
+            report_progress(9, job, "Stopped service, starting the move...")
+            self.do_move_to_volume(volume, job)
+            report_progress(98, job, "Move complete, rebuilding...")
+            rebuild_system(job, upgrade=False)

+            Jobs.update(
+                job=job,
+                status=JobStatus.FINISHED,
+                result=f"{service_name} moved successfully.",
+                status_text=f"Starting {service_name}...",
+                progress=100,
+            )
+        return job

    @classmethod
    def owned_path(cls, path: str):
-        """A default guess on folder ownership"""
+        """Default folder ownership"""
+        service_name = cls.get_display_name()
+        try:
+            owner = cls.get_user()
+            if owner is None:
+                # TODO: assume root?
+                # (if we do not want to do assumptions, maybe not declare user optional?)
+                raise LookupError(f"no user for service: {service_name}")
+            group = cls.get_group()
+            if group is None:
+                raise LookupError(f"no group for service: {service_name}")
+        except Exception as error:
+            raise LookupError(
+                f"when deciding a bind for folder {path} of service {service_name}, error: {str(error)}"
+            )

        return OwnedPath(
            path=path,
-            owner=cls.get_user(),
+            owner=owner,
-            group=cls.get_group(),
+            group=group,
        )

    def pre_backup(self):
@@ -305,11 +503,15 @@ class StoppedService:
    def __enter__(self) -> Service:
        self.original_status = self.service.get_status()
-        if self.original_status not in [ServiceStatus.INACTIVE, ServiceStatus.FAILED]:
+        if (
+            self.original_status not in [ServiceStatus.INACTIVE, ServiceStatus.FAILED]
+            and not self.service.is_always_active()
+        ):
            try:
                self.service.stop()
                wait_until_true(
-                    lambda: self.service.get_status() == ServiceStatus.INACTIVE,
+                    lambda: self.service.get_status()
+                    in [ServiceStatus.INACTIVE, ServiceStatus.FAILED],
                    timeout_sec=DEFAULT_START_STOP_TIMEOUT,
                )
            except TimeoutError as error:
@@ -319,7 +521,10 @@ class StoppedService:
        return self.service

    def __exit__(self, type, value, traceback):
-        if self.original_status in [ServiceStatus.ACTIVATING, ServiceStatus.ACTIVE]:
+        if (
+            self.original_status in [ServiceStatus.ACTIVATING, ServiceStatus.ACTIVE]
+            and not self.service.is_always_active()
+        ):
            try:
                self.service.start()
                wait_until_true(
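
The move pipeline reads best end to end. A sketch of how a caller might drive it, assuming a movable service instance and a target volume looked up via BlockDevices; the exact call site lives outside this diff and the drive name is made up:

# Illustrative driver for the new move pipeline.
service = Nextcloud()  # any movable service from this changeset
volume = BlockDevices().get_block_device("sdb")  # hypothetical target drive

job = Jobs.add(
    type_id=f"services.{service.get_id()}.move",
    name=f"Move {service.get_display_name()}",
    description=f"Moving {service.get_display_name()} data to {volume.name}",
)

# assert_can_move() raises MoveError early; the heavy lifting runs inside
# StoppedService, so the service is down only for the duration of the move.
service.move_to_volume(volume, job)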


@@ -0,0 +1,22 @@
from selfprivacy_api.services import Service
from selfprivacy_api.utils.block_devices import BlockDevice
from selfprivacy_api.utils.huey import huey
from selfprivacy_api.jobs import Job, Jobs, JobStatus


@huey.task()
def move_service(service: Service, new_volume: BlockDevice, job: Job) -> bool:
    """
    Move service's folders to new physical volume
    Does not raise exceptions (we cannot handle exceptions from tasks).
    Reports all errors via job.
    """
    try:
        service.move_to_volume(new_volume, job)
    except Exception as e:
        Jobs.update(
            job=job,
            status=JobStatus.ERROR,
            error=type(e).__name__ + " " + str(e),
        )
    return True
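
Because the huey worker cannot propagate exceptions to the caller, the job object is the only feedback channel. A sketch of the intended call pattern, with the job lookup shown only for illustration:

# Illustrative: enqueue the move, then read the job back for the outcome.
move_service(service, new_volume, job)  # returns immediately, runs in the worker

refreshed = Jobs.get_job(job.uid)  # assumed lookup by job uid
if refreshed is not None and refreshed.status == JobStatus.ERROR:
    print(refreshed.error)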


@@ -1,18 +1,17 @@
"""Class representing Bitwarden service"""
import base64
-import typing
import subprocess
from typing import List
from os import path
+from pathlib import Path

# from enum import Enum

-from selfprivacy_api.jobs import Job, Jobs, JobStatus
+from selfprivacy_api.jobs import Job
-from selfprivacy_api.services.service import Service, ServiceDnsRecord, ServiceStatus
+from selfprivacy_api.services.service import Service, ServiceStatus
from selfprivacy_api.utils.block_devices import BlockDevice
-from selfprivacy_api.services.generic_service_mover import move_service, FolderMoveNames
-import selfprivacy_api.utils.network as network_utils

from selfprivacy_api.services.test_service.icon import BITWARDEN_ICON
@@ -26,6 +25,7 @@ class DummyService(Service):
    startstop_delay = 0.0
    backuppable = True
    movable = True
+    fail_to_stop = False
    # if False, we try to actually move
    simulate_moving = True
    drive = "sda1"
@@ -59,12 +59,6 @@ class DummyService(Service):
        # return ""
        return base64.b64encode(BITWARDEN_ICON.encode("utf-8")).decode("utf-8")

-    @staticmethod
-    def get_url() -> typing.Optional[str]:
-        """Return service url."""
-        domain = "test.com"
-        return f"https://password.{domain}"

    @classmethod
    def is_movable(cls) -> bool:
        return cls.movable
@@ -80,18 +74,31 @@ class DummyService(Service):
    @classmethod
    def status_file(cls) -> str:
        dir = cls.folders[0]
-        # we do not REALLY want to store our state in our declared folders
-        return path.join(dir, "..", "service_status")
+        # We do not want to store our state in our declared folders
+        # Because they are moved and tossed in tests wildly
+        parent = Path(dir).parent
+        return path.join(parent, "service_status")

    @classmethod
    def set_status(cls, status: ServiceStatus):
        with open(cls.status_file(), "w") as file:
-            status_string = file.write(status.value)
+            file.write(status.value)

    @classmethod
    def get_status(cls) -> ServiceStatus:
+        filepath = cls.status_file()
+        if filepath in [None, ""]:
+            raise ValueError("We do not have a path for our test dummy status file!")
+        if not path.exists(filepath):
+            raise FileNotFoundError(filepath)

        with open(cls.status_file(), "r") as file:
            status_string = file.read().strip()
+        if status_string in [None, ""]:
+            raise NotImplementedError(
+                f"It appears our test service no longer has any status in the statusfile. Filename = {cls.status_file}, status string inside is '{status_string}' (quoted) "
+            )
        return ServiceStatus[status_string]

    @classmethod
@@ -99,16 +106,21 @@ class DummyService(Service):
        cls, new_status: ServiceStatus, delay_sec: float
    ):
        """simulating a delay on systemd side"""
+        if not isinstance(new_status, ServiceStatus):
+            raise ValueError(
+                f"received an invalid new status for test service. new status: {str(new_status)}"
+            )
+        if delay_sec == 0:
+            cls.set_status(new_status)
+            return

        status_file = cls.status_file()
        command = [
            "bash",
            "-c",
            f" sleep {delay_sec} && echo {new_status.value} > {status_file}",
        ]
-        handle = subprocess.Popen(command)
-        if delay_sec == 0:
-            handle.communicate()
+        subprocess.Popen(command)

    @classmethod
    def set_backuppable(cls, new_value: bool) -> None:
@@ -141,14 +153,23 @@ class DummyService(Service):
        when moved"""
        cls.simulate_moving = enabled

+    @classmethod
+    def simulate_fail_to_stop(cls, value: bool):
+        cls.fail_to_stop = value

    @classmethod
    def stop(cls):
        # simulate a failing service unable to stop
        if not cls.get_status() == ServiceStatus.FAILED:
            cls.set_status(ServiceStatus.DEACTIVATING)
-            cls.change_status_with_async_delay(
-                ServiceStatus.INACTIVE, cls.startstop_delay
-            )
+            if cls.fail_to_stop:
+                cls.change_status_with_async_delay(
+                    ServiceStatus.FAILED, cls.startstop_delay
+                )
+            else:
+                cls.change_status_with_async_delay(
+                    ServiceStatus.INACTIVE, cls.startstop_delay
+                )

    @classmethod
    def start(cls):
@@ -160,12 +181,12 @@ class DummyService(Service):
        cls.set_status(ServiceStatus.RELOADING)  # is a correct one?
        cls.change_status_with_async_delay(ServiceStatus.ACTIVE, cls.startstop_delay)

-    @staticmethod
-    def get_configuration():
+    @classmethod
+    def get_configuration(cls):
        return {}

-    @staticmethod
-    def set_configuration(config_items):
+    @classmethod
+    def set_configuration(cls, config_items):
        return super().set_configuration(config_items)

    @staticmethod
@@ -185,43 +206,9 @@ class DummyService(Service):
    def get_folders(cls) -> List[str]:
        return cls.folders

-    @staticmethod
-    def get_dns_records() -> typing.List[ServiceDnsRecord]:
-        """Return list of DNS records for Bitwarden service."""
-        return [
-            ServiceDnsRecord(
-                type="A",
-                name="password",
-                content=network_utils.get_ip4(),
-                ttl=3600,
-                display_name="Test Service",
-            ),
-            ServiceDnsRecord(
-                type="AAAA",
-                name="password",
-                content=network_utils.get_ip6(),
-                ttl=3600,
-                display_name="Test Service (IPv6)",
-            ),
-        ]

-    def move_to_volume(self, volume: BlockDevice) -> Job:
-        job = Jobs.add(
-            type_id=f"services.{self.get_id()}.move",
-            name=f"Move {self.get_display_name()}",
-            description=f"Moving {self.get_display_name()} data to {volume.name}",
-        )
-        if self.simulate_moving is False:
-            # completely generic code, TODO: make it the default impl.
-            move_service(
-                self,
-                volume,
-                job,
-                FolderMoveNames.default_foldermoves(self),
-                self.get_id(),
-            )
-        else:
-            Jobs.update(job, status=JobStatus.FINISHED)
-        self.set_drive(volume.name)
-        return job
+    def do_move_to_volume(self, volume: BlockDevice, job: Job) -> Job:
+        if self.simulate_moving is False:
+            return super(DummyService, self).do_move_to_volume(volume, job)
+        else:
+            self.set_drive(volume.name)
+            return job
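
The new fail_to_stop flag lets tests drive the dummy service into FAILED instead of INACTIVE when stopping. A hypothetical pytest-style use (the test name and fixture are invented):

# Hypothetical test using the new knob; with startstop_delay == 0 the
# status change is applied synchronously.
def test_stop_failure_is_visible(dummy_service):
    DummyService.simulate_fail_to_stop(True)
    dummy_service.stop()
    assert dummy_service.get_status() == ServiceStatus.FAILED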


@@ -1,4 +1,14 @@
+from os import environ

from selfprivacy_api.utils.huey import huey
-from selfprivacy_api.jobs.test import test_job
from selfprivacy_api.backup.tasks import *
-from selfprivacy_api.services.generic_service_mover import move_service
+from selfprivacy_api.services.tasks import move_service
+from selfprivacy_api.jobs.upgrade_system import rebuild_system_task
+from selfprivacy_api.jobs.test import test_job
+from selfprivacy_api.jobs.nix_collect_garbage import calculate_and_clear_dead_paths

+if environ.get("TEST_MODE"):
+    from tests.test_huey import sum


@ -8,6 +8,13 @@ import subprocess
import portalocker import portalocker
import typing import typing
from traceback import format_tb as format_traceback
from selfprivacy_api.utils.default_subdomains import (
DEFAULT_SUBDOMAINS,
RESERVED_SUBDOMAINS,
)
USERDATA_FILE = "/etc/nixos/userdata.json" USERDATA_FILE = "/etc/nixos/userdata.json"
SECRETS_FILE = "/etc/selfprivacy/secrets.json" SECRETS_FILE = "/etc/selfprivacy/secrets.json"
@@ -133,6 +140,22 @@ def is_username_forbidden(username):
    return False
+
+
+def check_if_subdomain_is_taken(subdomain: str) -> bool:
+    """Check if subdomain is already taken or reserved"""
+    if subdomain in RESERVED_SUBDOMAINS:
+        return True
+    with ReadUserData() as data:
+        for module in data["modules"]:
+            if (
+                data["modules"][module].get(
+                    "subdomain", DEFAULT_SUBDOMAINS.get(module, "")
+                )
+                == subdomain
+            ):
+                return True
+    return False
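A short usage sketch: a module's effective subdomain is its explicit "subdomain" setting, falling back to DEFAULT_SUBDOMAINS, and reserved names are rejected outright. The call site below is hypothetical:

    # hypothetical call site in a settings mutation
    requested = "git"
    if check_if_subdomain_is_taken(requested):
        raise ValueError(f"Subdomain {requested} is reserved or already in use")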
def parse_date(date_str: str) -> datetime.datetime:
    """Parse date string which can be in one of these formats:
    - %Y-%m-%dT%H:%M:%S.%fZ
@@ -199,3 +222,15 @@ def hash_password(password):
    hashed_password = hashed_password.decode("ascii")
    hashed_password = hashed_password.rstrip()
    return hashed_password
+
+
+def write_to_log(message):
+    with open("/etc/selfprivacy/log", "a") as log:
+        log.write(f"{datetime.datetime.now()} {message}\n")
+        log.flush()
+        os.fsync(log.fileno())
+
+
+def pretty_error(e: Exception) -> str:
+    traceback = "\n".join(format_traceback(e.__traceback__))
+    return type(e).__name__ + ": " + str(e) + ": " + traceback


@@ -1,9 +1,12 @@
"""A block device API wrapping lsblk""" """A block device API wrapping lsblk"""
from __future__ import annotations from __future__ import annotations
import subprocess import subprocess
import json import json
import typing import typing
from pydantic import BaseModel
from selfprivacy_api.utils import WriteUserData from selfprivacy_api.utils import WriteUserData
from selfprivacy_api.utils.singleton_metaclass import SingletonMetaclass from selfprivacy_api.utils.singleton_metaclass import SingletonMetaclass
@@ -51,6 +54,7 @@ class BlockDevice:
    def update_from_dict(self, device_dict: dict):
        self.name = device_dict["name"]
        self.path = device_dict["path"]
+        # TODO: maybe parse it as numbers, as in origin?
        self.fsavail = str(device_dict["fsavail"])
        self.fssize = str(device_dict["fssize"])
        self.fstype = device_dict["fstype"]
@@ -88,6 +92,14 @@ class BlockDevice:
    def __hash__(self):
        return hash(self.name)

+    def get_display_name(self) -> str:
+        if self.is_root():
+            return "System disk"
+        elif self.model == "Volume":
+            return "Expandable volume"
+        else:
+            return self.name
+
    def is_root(self) -> bool:
        """
        Return True if the block device is the root device.
@@ -169,6 +181,9 @@ class BlockDevice:
        return False


+# TODO: SingletonMetaclass messes with tests and is able to persist state
+# between them. If you have very weird test crosstalk that's probably why
+# I am not sure it NEEDS to be SingletonMetaclass
class BlockDevices(metaclass=SingletonMetaclass):
    """Singleton holding all Block devices"""


@@ -0,0 +1,22 @@
DEFAULT_SUBDOMAINS = {
"bitwarden": "password",
"gitea": "git",
"jitsi-meet": "meet",
"simple-nixos-mailserver": "",
"nextcloud": "cloud",
"ocserv": "vpn",
"pleroma": "social",
"roundcube": "roundcube",
"testservice": "test",
"monitoring": "",
}
RESERVED_SUBDOMAINS = [
"admin",
"administrator",
"api",
"auth",
"user",
"users",
"ntfy",
]


@@ -0,0 +1,12 @@
from selfprivacy_api.graphql.mutations.mutation_interface import (
GenericJobMutationReturn,
)
def api_job_mutation_error(error: Exception, code: int = 400):
return GenericJobMutationReturn(
success=False,
code=code,
message=str(error),
job=None,
)
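A hedged sketch of a call site (the resolver name and job-creation step are hypothetical): the helper collapses any exception into the GraphQL error envelope instead of letting it surface as a transport-level error:

    # hypothetical GraphQL mutation resolver
    def resolve_start_move(service_id: str) -> GenericJobMutationReturn:
        try:
            job = ...  # enqueue the actual work here
        except Exception as error:
            return api_job_mutation_error(error, code=400)
        return GenericJobMutationReturn(success=True, code=200, message="OK", job=job)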


@@ -1,16 +1,25 @@
"""MiniHuey singleton."""
-import os
-from huey import SqliteHuey
+from os import environ
+from huey import RedisHuey

-HUEY_DATABASE = "/etc/selfprivacy/tasks.db"
+from selfprivacy_api.utils.redis_pool import RedisPool
+
+HUEY_DATABASE_NUMBER = 10
+
+
+def immediate() -> bool:
+    if environ.get("HUEY_QUEUES_FOR_TESTS"):
+        return False
+    if environ.get("TEST_MODE"):
+        return True
+    return False
+

# Singleton instance containing the huey database.
-test_mode = os.environ.get("TEST_MODE")
-
-huey = SqliteHuey(
+huey = RedisHuey(
    "selfprivacy-api",
-    filename=HUEY_DATABASE if not test_mode else None,
-    immediate=test_mode == "true",
+    url=RedisPool.connection_url(dbnumber=HUEY_DATABASE_NUMBER),
+    immediate=immediate(),
    utc=True,
)
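With RedisHuey the queue lives in Redis database 10 instead of an SQLite file, and immediate() keeps huey's standard inline mode for tests: when immediate=True, huey runs tasks synchronously in-process instead of handing them to a consumer. A hedged usage sketch (the task itself is hypothetical; @huey.task() is standard huey API):

    from selfprivacy_api.utils.huey import huey


    @huey.task()
    def add(a: int, b: int) -> int:
        # runs on the huey consumer, or inline when immediate=True
        return a + b


    result = add(2, 3)  # enqueues the task, returns a result handle
    print(result(blocking=True))  # waits for the consumer and prints 5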


@@ -0,0 +1,429 @@
"""Prometheus monitoring queries."""
# pylint: disable=too-few-public-methods
import requests
import strawberry
from dataclasses import dataclass
from typing import Optional, Annotated, Union, List, Tuple
from datetime import datetime, timedelta
PROMETHEUS_URL = "http://localhost:9001"
@strawberry.type
@dataclass
class MonitoringValue:
timestamp: datetime
value: str
@strawberry.type
@dataclass
class MonitoringMetric:
metric_id: str
values: List[MonitoringValue]
@strawberry.type
class MonitoringQueryError:
error: str
@strawberry.type
class MonitoringValues:
values: List[MonitoringValue]
@strawberry.type
class MonitoringMetrics:
metrics: List[MonitoringMetric]
MonitoringValuesResult = Annotated[
Union[MonitoringValues, MonitoringQueryError],
strawberry.union("MonitoringValuesResult"),
]
MonitoringMetricsResult = Annotated[
Union[MonitoringMetrics, MonitoringQueryError],
strawberry.union("MonitoringMetricsResult"),
]
class MonitoringQueries:
@staticmethod
def _send_range_query(
query: str, start: int, end: int, step: int, result_type: Optional[str] = None
) -> Union[dict, MonitoringQueryError]:
try:
response = requests.get(
f"{PROMETHEUS_URL}/api/v1/query_range",
params={
"query": query,
"start": start,
"end": end,
"step": step,
},
timeout=0.8,
)
if response.status_code != 200:
return MonitoringQueryError(
error=f"Prometheus returned unexpected HTTP status code. Error: {response.text}. The query was {query}"
)
json = response.json()
if result_type and json["data"]["resultType"] != result_type:
return MonitoringQueryError(
error="Unexpected resultType returned from Prometheus, request failed"
)
return json["data"]
except Exception as error:
return MonitoringQueryError(
error=f"Prometheus request failed! Error: {str(error)}"
)
@staticmethod
def _send_query(
query: str, result_type: Optional[str] = None
) -> Union[dict, MonitoringQueryError]:
try:
response = requests.get(
f"{PROMETHEUS_URL}/api/v1/query",
params={
"query": query,
},
timeout=0.8,
)
if response.status_code != 200:
return MonitoringQueryError(
error=f"Prometheus returned unexpected HTTP status code. Error: {response.text}. The query was {query}"
)
json = response.json()
if result_type and json["data"]["resultType"] != result_type:
return MonitoringQueryError(
error="Unexpected resultType returned from Prometheus, request failed"
)
return json["data"]
except Exception as error:
return MonitoringQueryError(
error=f"Prometheus request failed! Error: {str(error)}"
)
@staticmethod
def _prometheus_value_to_monitoring_value(x: Tuple[int, str]):
return MonitoringValue(timestamp=datetime.fromtimestamp(x[0]), value=x[1])
@staticmethod
def _clean_slice_id(slice_id: str, clean_id: bool) -> str:
"""Slices come in form of `/slice_name.slice`, we need to remove the `.slice` and `/` part."""
if clean_id:
parts = slice_id.split(".")[0].split("/")
if len(parts) > 1:
return parts[1]
else:
raise ValueError(f"Incorrect format slice_id: {slice_id}")
return slice_id
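    # Worked example (illustrative, not in the original file):
    #   _clean_slice_id("/system.slice", clean_id=True)
    #   "/system.slice".split(".")[0] -> "/system"
    #   "/system".split("/") -> ["", "system"], so the method returns "system"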
@staticmethod
def _prometheus_response_to_monitoring_metrics(
response: dict, id_key: str, clean_id: bool = False
) -> List[MonitoringMetric]:
if response["resultType"] == "vector":
return list(
map(
lambda x: MonitoringMetric(
metric_id=MonitoringQueries._clean_slice_id(
x["metric"].get(id_key, "/unknown.slice"),
clean_id=clean_id,
),
values=[
MonitoringQueries._prometheus_value_to_monitoring_value(
x["value"]
)
],
),
response["result"],
)
)
else:
return list(
map(
lambda x: MonitoringMetric(
metric_id=MonitoringQueries._clean_slice_id(
x["metric"].get(id_key, "/unknown.slice"), clean_id=clean_id
),
values=list(
map(
MonitoringQueries._prometheus_value_to_monitoring_value,
x["values"],
)
),
),
response["result"],
)
)
@staticmethod
def _calculate_offset_and_duration(
start: datetime, end: datetime
) -> Tuple[int, int]:
"""Calculate the offset and duration for Prometheus queries.
        They must be in seconds.
"""
offset = int((datetime.now() - end).total_seconds())
duration = int((end - start).total_seconds())
return offset, duration
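    # Worked example (illustrative, not in the original file): if now is 12:00,
    # start is 11:00 and end is 11:40, then offset = 1200s and duration = 2400s,
    # so a query of the form "[2400s:] offset 1200s" selects exactly the
    # start..end window.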
@staticmethod
def cpu_usage_overall(
start: Optional[datetime] = None,
end: Optional[datetime] = None,
step: int = 60, # seconds
) -> MonitoringValuesResult:
"""
Get CPU information.
Args:
start (datetime, optional): The start time.
Defaults to 20 minutes ago if not provided.
end (datetime, optional): The end time.
Defaults to current time if not provided.
step (int): Interval in seconds for querying disk usage data.
"""
if start is None:
start = datetime.now() - timedelta(minutes=20)
if end is None:
end = datetime.now()
start_timestamp = int(start.timestamp())
end_timestamp = int(end.timestamp())
query = '100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)'
data = MonitoringQueries._send_range_query(
query, start_timestamp, end_timestamp, step, result_type="matrix"
)
if isinstance(data, MonitoringQueryError):
return data
return MonitoringValues(
values=list(
map(
MonitoringQueries._prometheus_value_to_monitoring_value,
data["result"][0]["values"],
)
)
)
@staticmethod
def memory_usage_overall(
start: Optional[datetime] = None,
end: Optional[datetime] = None,
step: int = 60, # seconds
) -> MonitoringValuesResult:
"""
Get memory usage.
Args:
start (datetime, optional): The start time.
Defaults to 20 minutes ago if not provided.
end (datetime, optional): The end time.
Defaults to current time if not provided.
step (int): Interval in seconds for querying memory usage data.
"""
if start is None:
start = datetime.now() - timedelta(minutes=20)
if end is None:
end = datetime.now()
start_timestamp = int(start.timestamp())
end_timestamp = int(end.timestamp())
query = "100 - (100 * (node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes))"
data = MonitoringQueries._send_range_query(
query, start_timestamp, end_timestamp, step, result_type="matrix"
)
if isinstance(data, MonitoringQueryError):
return data
return MonitoringValues(
values=list(
map(
MonitoringQueries._prometheus_value_to_monitoring_value,
data["result"][0]["values"],
)
)
)
@staticmethod
def memory_usage_max_by_slice(
start: Optional[datetime] = None,
end: Optional[datetime] = None,
) -> MonitoringMetricsResult:
"""
Get maximum memory usage for each service (i.e. systemd slice).
Args:
start (datetime, optional): The start time.
Defaults to 20 minutes ago if not provided.
end (datetime, optional): The end time.
Defaults to current time if not provided.
"""
if start is None:
start = datetime.now() - timedelta(minutes=20)
if end is None:
end = datetime.now()
offset, duration = MonitoringQueries._calculate_offset_and_duration(start, end)
if offset == 0:
query = f'max_over_time((container_memory_rss{{id!~".*slice.*slice", id=~".*slice"}}+container_memory_swap{{id!~".*slice.*slice", id=~".*slice"}})[{duration}s:])'
else:
query = f'max_over_time((container_memory_rss{{id!~".*slice.*slice", id=~".*slice"}}+container_memory_swap{{id!~".*slice.*slice", id=~".*slice"}})[{duration}s:] offset {offset}s)'
data = MonitoringQueries._send_query(query, result_type="vector")
if isinstance(data, MonitoringQueryError):
return data
return MonitoringMetrics(
metrics=MonitoringQueries._prometheus_response_to_monitoring_metrics(
data, "id", clean_id=True
)
)
@staticmethod
def memory_usage_average_by_slice(
start: Optional[datetime] = None,
end: Optional[datetime] = None,
) -> MonitoringMetricsResult:
"""
Get average memory usage for each service (i.e. systemd slice).
Args:
start (datetime, optional): The start time.
Defaults to 20 minutes ago if not provided.
end (datetime, optional): The end time.
Defaults to current time if not provided.
"""
if start is None:
start = datetime.now() - timedelta(minutes=20)
if end is None:
end = datetime.now()
offset, duration = MonitoringQueries._calculate_offset_and_duration(start, end)
if offset == 0:
query = f'avg_over_time((container_memory_rss{{id!~".*slice.*slice", id=~".*slice"}}+container_memory_swap{{id!~".*slice.*slice", id=~".*slice"}})[{duration}s:])'
else:
query = f'avg_over_time((container_memory_rss{{id!~".*slice.*slice", id=~".*slice"}}+container_memory_swap{{id!~".*slice.*slice", id=~".*slice"}})[{duration}s:] offset {offset}s)'
data = MonitoringQueries._send_query(query, result_type="vector")
if isinstance(data, MonitoringQueryError):
return data
return MonitoringMetrics(
metrics=MonitoringQueries._prometheus_response_to_monitoring_metrics(
data, "id", clean_id=True
)
)
@staticmethod
def disk_usage_overall(
start: Optional[datetime] = None,
end: Optional[datetime] = None,
step: int = 60, # seconds
) -> MonitoringMetricsResult:
"""
Get disk usage information.
Args:
start (datetime, optional): The start time.
Defaults to 20 minutes ago if not provided.
end (datetime, optional): The end time.
Defaults to current time if not provided.
step (int): Interval in seconds for querying disk usage data.
"""
if start is None:
start = datetime.now() - timedelta(minutes=20)
if end is None:
end = datetime.now()
start_timestamp = int(start.timestamp())
end_timestamp = int(end.timestamp())
query = """100 - (100 * sum by (device) (node_filesystem_avail_bytes{fstype!="rootfs",fstype!="ramfs",fstype!="tmpfs",mountpoint!="/efi"}) / sum by (device) (node_filesystem_size_bytes{fstype!="rootfs",fstype!="ramfs",fstype!="tmpfs",mountpoint!="/efi"}))"""
data = MonitoringQueries._send_range_query(
query, start_timestamp, end_timestamp, step, result_type="matrix"
)
if isinstance(data, MonitoringQueryError):
return data
return MonitoringMetrics(
metrics=MonitoringQueries._prometheus_response_to_monitoring_metrics(
data, "device"
)
)
@staticmethod
def network_usage_overall(
start: Optional[datetime] = None,
end: Optional[datetime] = None,
step: int = 60, # seconds
) -> MonitoringMetricsResult:
"""
Get network usage information for both download and upload.
Args:
start (datetime, optional): The start time.
Defaults to 20 minutes ago if not provided.
end (datetime, optional): The end time.
Defaults to current time if not provided.
step (int): Interval in seconds for querying network data.
"""
if start is None:
start = datetime.now() - timedelta(minutes=20)
if end is None:
end = datetime.now()
start_timestamp = int(start.timestamp())
end_timestamp = int(end.timestamp())
query = """
label_replace(rate(node_network_receive_bytes_total{device!="lo"}[5m]), "direction", "receive", "device", ".*")
or
label_replace(rate(node_network_transmit_bytes_total{device!="lo"}[5m]), "direction", "transmit", "device", ".*")
"""
data = MonitoringQueries._send_range_query(
query, start_timestamp, end_timestamp, step, result_type="matrix"
)
if isinstance(data, MonitoringQueryError):
return data
return MonitoringMetrics(
metrics=MonitoringQueries._prometheus_response_to_monitoring_metrics(
data, "direction"
)
)
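A hedged usage sketch for these helpers, assuming a Prometheus instance is actually answering on localhost:9001 as PROMETHEUS_URL expects:

    from datetime import datetime, timedelta

    result = MonitoringQueries.cpu_usage_overall(
        start=datetime.now() - timedelta(hours=1),
        step=120,  # one point every two minutes
    )
    if isinstance(result, MonitoringQueryError):
        print("query failed:", result.error)
    else:
        for point in result.values:
            print(point.timestamp, point.value)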


@@ -2,6 +2,7 @@
"""Network utils"""
import subprocess
import re
+import ipaddress

from typing import Optional
@@ -17,13 +18,15 @@ def get_ip4() -> str:
    return ip4.group(1) if ip4 else ""


-def get_ip6() -> str:
+def get_ip6() -> Optional[str]:
    """Get IPv6 address"""
    try:
-        ip6 = subprocess.check_output(["ip", "addr", "show", "dev", "eth0"]).decode(
-            "utf-8"
-        )
-        ip6 = re.search(r"inet6 (\S+)\/\d+", ip6)
+        ip6_addresses = subprocess.check_output(
+            ["ip", "addr", "show", "dev", "eth0"]
+        ).decode("utf-8")
+        ip6_addresses = re.findall(r"inet6 (\S+)\/\d+", ip6_addresses)
+        for address in ip6_addresses:
+            if ipaddress.IPv6Address(address).is_global:
+                return address
    except subprocess.CalledProcessError:
-        ip6 = None
-    return ip6.group(1) if ip6 else ""
+        return None
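The is_global check is what filters out the link-local and otherwise non-routable addresses that `ip addr` also reports. An illustration with example addresses (assumed, not from the source):

    import ipaddress

    assert not ipaddress.IPv6Address("fe80::1").is_global  # link-local
    assert not ipaddress.IPv6Address("2001:db8::1").is_global  # documentation prefix
    assert ipaddress.IPv6Address("2606:4700:4700::1111").is_global  # publicly routable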


@@ -1,15 +1,23 @@
+import uuid
+
from datetime import datetime
from typing import Optional
from enum import Enum


def store_model_as_hash(redis, redis_key, model):
-    for key, value in model.dict().items():
-        if isinstance(value, datetime):
-            value = value.isoformat()
-        if isinstance(value, Enum):
-            value = value.value
-        redis.hset(redis_key, key, str(value))
+    model_dict = model.dict()
+    for key, value in model_dict.items():
+        if isinstance(value, uuid.UUID):
+            value = str(value)
+        if isinstance(value, datetime):
+            value = value.isoformat()
+        if isinstance(value, Enum):
+            value = value.value
+        value = str(value)
+        model_dict[key] = value
+    redis.hset(redis_key, mapping=model_dict)


def hash_as_model(redis, redis_key: str, model_class):
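A hedged sketch of the write path (the model and the redis_connection client are assumed for illustration): collecting everything into model_dict first lets redis-py's hset(mapping=...) write the whole hash in a single command:

    import uuid
    from datetime import datetime

    from pydantic import BaseModel


    class ExampleJob(BaseModel):  # hypothetical model
        uid: uuid.UUID
        created_at: datetime
        name: str


    job = ExampleJob(uid=uuid.uuid4(), created_at=datetime.now(), name="backup")
    # redis_connection: a connected redis.Redis client (assumed)
    store_model_as_hash(redis_connection, "jobs:example", job)
    # the hash now holds only strings, e.g. {"uid": "...", "created_at": "...", "name": "backup"}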


@@ -1,32 +1,42 @@
"""
Redis pool module for selfprivacy_api
"""
-from os import environ
import redis
+import redis.asyncio as redis_async

-from selfprivacy_api.utils.singleton_metaclass import SingletonMetaclass
+from redis.asyncio.client import PubSub

REDIS_SOCKET = "/run/redis-sp-api/redis.sock"


-class RedisPool(metaclass=SingletonMetaclass):
+class RedisPool:
    """
    Redis connection pool singleton.
    """

    def __init__(self):
-        if "USE_REDIS_PORT" in environ:
-            self._pool = redis.ConnectionPool(
-                host="127.0.0.1",
-                port=int(environ["USE_REDIS_PORT"]),
-                decode_responses=True,
-            )
-        else:
-            self._pool = redis.ConnectionPool.from_url(
-                f"unix://{REDIS_SOCKET}",
-                decode_responses=True,
-            )
-        self._pubsub_connection = self.get_connection()
+        self._dbnumber = 0
+        url = RedisPool.connection_url(dbnumber=self._dbnumber)
+        # We need a normal sync pool because otherwise
+        # our whole API will need to be async
+        self._pool = redis.ConnectionPool.from_url(
+            url,
+            decode_responses=True,
+        )
+        # We need an async pool for pubsub
+        self._async_pool = redis_async.ConnectionPool.from_url(
+            url,
+            decode_responses=True,
+        )
+
+    @staticmethod
+    def connection_url(dbnumber: int) -> str:
+        """
+        redis://[[username]:[password]]@localhost:6379/0
+        unix://[username@]/path/to/socket.sock?db=0[&password=password]
+        """
+        return f"unix://{REDIS_SOCKET}?db={dbnumber}"

    def get_connection(self):
        """
@@ -34,8 +44,15 @@ class RedisPool(metaclass=SingletonMetaclass):
        """
        return redis.Redis(connection_pool=self._pool)

-    def get_pubsub(self):
-        """
-        Get a pubsub connection from the pool.
-        """
-        return self._pubsub_connection.pubsub()
+    def get_connection_async(self) -> redis_async.Redis:
+        """
+        Get an async connection from the pool.
+        Async connections allow pubsub.
+        """
+        return redis_async.Redis(connection_pool=self._async_pool)
+
+    async def subscribe_to_keys(self, pattern: str) -> PubSub:
+        async_redis = self.get_connection_async()
+        pubsub = async_redis.pubsub()
+        await pubsub.psubscribe(f"__keyspace@{self._dbnumber}__:" + pattern)
+        return pubsub
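A hedged sketch of consuming these keyspace events (Redis only publishes them when notify-keyspace-events is enabled in its configuration; the handler below is illustrative):

    import asyncio


    async def watch_jobs():
        pool = RedisPool()
        # subscribe to changes of every key starting with "jobs:"
        pubsub = await pool.subscribe_to_keys("jobs:*")
        async for message in pubsub.listen():
            if message["type"] == "pmessage":
                # channel is "__keyspace@0__:jobs:<id>"; data is the command
                # that touched the key, e.g. "hset" or "del"
                print(message["channel"], message["data"])


    # asyncio.run(watch_jobs())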

Some files were not shown because too many files have changed in this diff.