Compare commits


171 commits

Author SHA1 Message Date
Houkime
3ad80397af feature(backups): invalidate errored backups when a backup succeeds 2025-03-05 16:38:28 +00:00
Inex Code
043d280d53 feat: Dynamic templating ()
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/165
2024-12-24 19:04:31 +02:00
Inex Code
7d9150a77a refactor: Temporarily disable CAA records as clients are not ready 2024-12-15 16:30:35 +03:00
Inex Code
8a672bab07 fix: API backups didn't backup userdata ()
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/164
2024-12-15 15:20:05 +02:00
Inex Code
a66ef79c3c refactor: Do not return URL for API itself 2024-12-14 22:49:27 +03:00
Inex Code
25b6b9ca77 fix: SelfPrivacy API didn't load on system startup 2024-12-08 16:39:30 +03:00
houkime
bc45ced6ad Merge pull request 'fix(backups): do not use post_restore on backup' () from fix-backup-hooks into master
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/162
Reviewed-by: Inex Code <inex.code@selfprivacy.org>
2024-12-06 14:47:10 +02:00
Houkime
b7bf423b8f fix(backups): do not use post_restore on backup 2024-12-06 10:33:44 +00:00
Alan Urmancheev
5a92ad0621 feat: NextCloud: add the enableImagemagick option 2024-11-29 15:41:57 +02:00
Inex Code
c10e57b19c fix: Wrong systemd dependency
Fixes 
2024-11-27 14:21:33 +03:00
Inex Code
6866226eae fix: Systemd slice name 2024-11-27 14:12:36 +03:00
Inex Code
e2a0e4fc3d fix: Fix user-facing SP API metadata 2024-11-27 14:08:26 +03:00
Inex Code
d91d8d2fd9 chore: Bump version to 3.4.0 2024-11-27 13:35:33 +03:00
nhnn
5aa1a378ef fix: free unused journal.Reader instances ()
The previous change was missing the code that frees journal.Reader instances from memory; this adds it.

Co-authored-by: nhnn <nhnn@disroot.org>
Co-authored-by: Inex Code <inex.code@selfprivacy.org>
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/158
Reviewed-by: Inex Code <inex.code@selfprivacy.org>
Co-authored-by: nhnn <nhnn@noreply.git.selfprivacy.org>
Co-committed-by: nhnn <nhnn@noreply.git.selfprivacy.org>
2024-11-27 12:32:01 +02:00
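
For context, a minimal sketch of the fix's idea, assuming the python-systemd bindings (systemd.journal.Reader exposes close()); the helper below is illustrative, not the project's code:

from systemd import journal

def read_unit_logs(unit: str, limit: int = 10) -> list:
    reader = journal.Reader()
    try:
        reader.add_match(_SYSTEMD_UNIT=unit)
        reader.seek_tail()
        return [reader.get_previous() for _ in range(limit)]
    finally:
        # the fix: always release the reader so journal handles are not leaked
        reader.close()
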
848befe3f1 feat: Use proper logging ()
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/154
Reviewed-by: Inex Code <inex.code@selfprivacy.org>
Co-authored-by: dettlaff <dettlaff@riseup.net>
Co-committed-by: dettlaff <dettlaff@riseup.net>
2024-10-23 14:38:01 +03:00
03d751e591 feat: add caa record ()
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/149
Reviewed-by: Inex Code <inex.code@selfprivacy.org>
Co-authored-by: dettlaff <dettlaff@riseup.net>
Co-committed-by: dettlaff <dettlaff@riseup.net>
2024-10-14 14:29:00 +03:00
11e020c0e1 fix: duplicate DNS records + add new test case 2024-10-04 14:27:40 +03:00
77eaa181ca feat: add _get_time_range 2024-10-02 15:35:05 +03:00
cb471f5a8e tests: add for swap 2024-10-02 15:35:05 +03:00
389ec2c81c feat: add swap usage query 2024-10-02 15:35:05 +03:00
Houkime
95a025d993 test(backup): unauthorized tests 2024-09-23 22:15:57 +03:00
Houkime
8e4e8c99b1 test(backup): total restore endpoint testing 2024-09-23 22:15:57 +03:00
Houkime
2ee66d143c fix(backup): early abort and better error reporting for restore_all 2024-09-23 22:15:57 +03:00
Houkime
3a33b2b486 test(services): utilities for checking and altering testfiles 2024-09-23 22:15:57 +03:00
Houkime
4e1c8b6faa test(backup): total restore nocrash test 2024-09-23 22:15:57 +03:00
Houkime
39312a0937 test(services): refactor dummy service creation so that we can test restores more easily 2024-09-23 22:15:57 +03:00
Houkime
ca86e4fcc0 fix(backups): add rclone to environments 2024-09-23 18:29:50 +00:00
Houkime
faa4402030 chore(block devices): edit comment to be more correct 2024-09-13 12:31:30 +00:00
Inex Code
6340ad348c chore: Recover fixes destroyed by force push
Please don't do this again
2024-09-13 12:11:56 +00:00
Inex Code
63bcfa3077 chore: string casing 2024-09-13 12:11:56 +00:00
Inex Code
d3e7eb44ea chore: Linting 2024-09-13 12:11:56 +00:00
Houkime
6eca44526a chore(services): clean up the config service 2024-09-13 12:11:56 +00:00
Houkime
408284a69f chore(backup): make a comment into a docstring 2024-09-13 12:11:56 +00:00
Houkime
5ea000baab feature(backups): manual autobackup -> total backup 2024-09-13 12:11:56 +00:00
Houkime
ee06d68047 feature(backups): allow non-autobackup slices for full restoration 2024-09-13 12:11:56 +00:00
Houkime
1a9a381753 refactor(backups): handle the case when there is no snapshot to sync date with 2024-09-13 12:11:56 +00:00
Houkime
53c6bc1af7 refactor(backups): cleanup old config service code 2024-09-13 12:11:56 +00:00
Houkime
0d23b91a37 refactor(backups): config service reformat 2024-09-13 12:11:56 +00:00
Houkime
27f09d04de fix(backups): change the dump folder 2024-09-13 12:11:56 +00:00
Houkime
b522c72aaf test(jobs): clean jobs properly 2024-09-13 12:11:56 +00:00
Houkime
b67777835d fix(backup): make last slice return a correct list 2024-09-13 12:11:56 +00:00
Houkime
a5b52c8f75 feature(backup): endpoint to force autobackup 2024-09-13 12:11:56 +00:00
Houkime
bb493e6b74 feature(backup): reload snapshots when migrating 2024-09-13 12:11:56 +00:00
Houkime
a4a70c07d3 test(backup): migration test 2024-09-13 12:11:56 +00:00
Houkime
427fdbdb49 test(backup): minimal snapshot slice test 2024-09-13 12:11:56 +00:00
Houkime
bfb0442e94 feature(backup): query to see restored snapshots in advance 2024-09-13 12:11:56 +00:00
Houkime
5e07a9eaeb feature(backup): error handling for the full restore endpoint 2024-09-13 12:11:56 +00:00
Houkime
7de5d26a81 feature(backup): full restore task 2024-09-13 12:11:56 +00:00
Houkime
be4e883b12 feature(backup): autobackup slice detection 2024-09-13 12:11:56 +00:00
Houkime
7ae550fd26 refactor(system): break out rebuild job creation 2024-09-13 12:11:56 +00:00
Houkime
f068329153 fix(service manager): debug and test backup hooks 2024-09-13 12:11:56 +00:00
Houkime
f8c6a8b9d6 refactor(utils): maybe make fsavail an int? 2024-09-13 12:11:56 +00:00
Houkime
af014e8b83 feature(backup): support for perma-active services and services with no existing data 2024-09-13 12:11:56 +00:00
Houkime
0329addd1f feature(services): add perma-active services (api itself) 2024-09-13 12:11:56 +00:00
Houkime
35e2e8cc78 test(dkim): separate dummy dkim into a folder 2024-09-13 12:11:56 +00:00
Houkime
c5c6d860fd test(secrets): add a dummy secrets file 2024-09-13 12:11:56 +00:00
Houkime
d4998ded46 refactor(services): migrate service management to a special service 2024-09-13 12:11:56 +00:00
Houkime
2ef674a037 refactor(services): PARTIAL migrate get_all_services 2024-09-13 12:11:56 +00:00
Houkime
f6151ee451 feature(backup): add migration specific endpoints 2024-09-13 12:11:56 +00:00
Houkime
8c44f78bbb feature(services): add config service 2024-09-13 12:11:56 +00:00
Houkime
f57eda5237 feature(services): allow moving uninitialized services 2024-09-13 12:11:56 +00:00
6afaefbb41 tests: fix nix_collect_garbage 2024-09-12 16:09:30 +04:00
Inex Code
e6b7a1c168 style: linting 2024-09-11 13:58:48 +03:00
Houkime
68d0ee8c5d test(system): dns migration 2024-09-11 13:58:48 +03:00
Houkime
77fb99d84e feature(system): dns migration 2024-09-11 13:58:48 +03:00
ac07090784 style: blacked 2024-09-05 15:57:27 +04:00
def
81d082ff2a fix: nix collect garbage 2024-09-05 14:54:58 +03:00
Houkime
8ef63eb90e fix(backups): cover the case when service fails to stop 2024-08-16 15:36:22 +03:00
391e4802b2 tests: add tests for monitoring ()
Co-authored-by: nhnn <nhnn@disroot.org>
Co-authored-by: Inex Code <inex.code@selfprivacy.org>
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/140
Reviewed-by: Inex Code <inex.code@selfprivacy.org>
Co-authored-by: dettlaff <dettlaff@riseup.net>
Co-committed-by: dettlaff <dettlaff@riseup.net>
2024-08-16 15:36:07 +03:00
Houkime
55bbb0f3cc test(services): add more debug to the dummy service 2024-08-16 14:14:56 +03:00
Inex Code
1d31a29dce chore: Add bandit to dev shell 2024-08-12 21:53:44 +03:00
bbd909a544 feat: timeout for monitoring 2024-08-12 21:45:21 +03:00
Houkime
3c3b0f6be0 fix(backups): allow retrying when deleting service files 2024-08-12 19:45:51 +03:00
nhnn
1bfe7cf8dc fix: stop prosody when jitsi stops 2024-08-09 11:17:27 +03:00
4cd90d0c93 feat: add Prometheus monitoring ()
Co-authored-by: nhnn <nhnn@disroot.org>
Co-authored-by: Inex Code <inex.code@selfprivacy.org>
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/120
Co-authored-by: dettlaff <dettlaff@riseup.net>
Co-committed-by: dettlaff <dettlaff@riseup.net>
2024-07-30 16:55:57 +03:00
Inex Code
1259c081ef style: Reformat with new Black version 2024-07-26 22:59:44 +03:00
Inex Code
659cfca8a3 chore: Migrate to NixOS 24.05 2024-07-26 22:59:32 +03:00
Inex Code
9b93107b36 feat: Service configuration ()
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/127
2024-07-26 18:33:04 +03:00
Inex Code
40b8eb06d0 Merge pull request 'feat: add option to filter logs by unit or slice' () from nhnn/selfprivacy-rest-api:logs-filtering into master
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/128
Reviewed-by: Inex Code <inex.code@selfprivacy.org>
2024-07-26 16:33:05 +03:00
nhnn
3c024cb613 feat: add option to filter logs by unit or slice 2024-07-25 20:34:28 +03:00
Alexander Tomokhov
a00aae1bee fix: remove '-v' in pytest-vm 2024-07-15 17:00:26 +03:00
Inex Code
b510af725b Merge pull request 'feat: add roundcube service' () from def/selfprivacy-rest-api:master into master
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/119
Reviewed-by: Inex Code <inex.code@selfprivacy.org>
2024-07-15 16:45:46 +03:00
Inex Code
d18d644cec Merge remote-tracking branch 'origin/master' into roundcube 2024-07-15 17:30:59 +04:00
Inex Code
16d1f9f21a Merge pull request 'feat: graphql endpoint to fetch system logs' () from nhnn/selfprivacy-rest-api:api-logs into master
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/116
Reviewed-by: Inex Code <inex.code@selfprivacy.org>
2024-07-15 16:23:30 +03:00
Inex Code
d8fe54e0e9 fix: do not use bare 'except' 2024-07-15 17:05:38 +04:00
Inex Code
5c5e098bab style: do not break line before logic operator 2024-07-15 17:02:34 +04:00
Inex Code
cc4b411657 refactor: Replace strawberry.types.Info with just Info 2024-07-15 16:59:27 +04:00
nhnn
94b0276f74 fix: extract business logic to utils/systemd_journal.py 2024-07-13 11:58:54 +03:00
Inex Code
c857678c9a docs: Update Contributing file 2024-07-11 20:20:08 +04:00
Inex Code
859ac4dbc6 chore: Update nixpkgs 2024-07-11 19:08:04 +04:00
Inex Code
4ca9b9f54e fix: Wait for ws logs test to init 2024-07-10 21:46:14 +04:00
Inex Code
faa8952e9c chore: Bump version to 3.3.0 2024-07-10 19:51:10 +04:00
Inex Code
5f3fc0d96e chore: formatting 2024-07-10 19:18:22 +04:00
Inex Code
9f5f0507e3 Merge remote-tracking branch 'origin/master' into api-logs 2024-07-10 18:52:10 +04:00
Inex Code
ceee6e4db9 fix: Read auth token from the connection initialization payload
Websockets do not provide headers, and sending a token as a query param is also not good (it gets into server's logs),
As an alternative, we can provide a token in the first ws payload.

Read more: https://strawberry.rocks/docs/general/subscriptions#authenticating-subscriptions
2024-07-05 18:14:18 +04:00
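
As a client-side illustration of this handshake, a sketch using the gql library, assuming its WebsocketsTransport accepts an init_payload (URL and token are placeholders):

from gql import Client
from gql.transport.websockets import WebsocketsTransport

transport = WebsocketsTransport(
    url="wss://api.example.org/graphql",
    # the token travels in the first ws payload instead of headers or query params
    init_payload={"Authorization": "Bearer TOKEN"},
)
client = Client(transport=transport)
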
Inex Code
a7be03a6d3 refactor: Remove setting KEA
This is already done via NixOS config
2024-07-04 18:49:17 +04:00
Houkime
9accf861c5 fix(websockets): add websockets dep so that uvicorn works 2024-07-04 17:19:25 +03:00
Houkime
41f6d8b6d2 test(websocket): remove some duplication 2024-07-04 17:19:25 +03:00
Houkime
57378a7940 test(websocket): remove excessive sleeping 2024-07-04 17:19:25 +03:00
Houkime
05ffa036b3 refactor(jobs): offload job subscription logic to a separate file 2024-07-04 17:19:25 +03:00
Houkime
ccf71078b8 feature(websocket): add auth to counter too 2024-07-04 17:19:25 +03:00
Houkime
cb641e4f37 feature(websocket): add auth 2024-07-04 17:19:25 +03:00
Houkime
0fda29cdd7 test(devices): provide devices for a service test to fix conditional test fail. 2024-07-04 17:19:25 +03:00
Houkime
442538ee43 feature(jobs): UNSAFE endpoint to get job updates 2024-07-04 17:19:25 +03:00
Houkime
51ccde8b07 test(jobs): test simple counting 2024-07-04 17:19:25 +03:00
Houkime
cbe5c56270 chore(jobs): shorter typehints and import sorting 2024-07-04 17:19:25 +03:00
Houkime
ed777e3ebf feature(jobs): add subscription endpoint 2024-07-04 17:19:25 +03:00
Houkime
f14866bdbc test(websocket): separate ping and init 2024-07-04 17:19:25 +03:00
Houkime
a2a4b461e7 test(websocket): ping pong test 2024-07-04 17:19:25 +03:00
Houkime
9add0b1dc1 test(websocket) test connection init 2024-07-04 17:19:25 +03:00
Houkime
00c42d9660 test(jobs): subscription query generating function 2024-07-04 17:19:25 +03:00
Houkime
2d9f48650e test(jobs) test API job format 2024-07-04 17:19:25 +03:00
Houkime
c4aa757ca4 test(jobs): test Graphql job getting 2024-07-04 17:19:25 +03:00
Houkime
63d2e48a98 feature(jobs): websocket connection 2024-07-04 17:19:25 +03:00
Houkime
9bfffcd820 feature(jobs): job update generator 2024-07-04 17:19:25 +03:00
Houkime
6510d4cac6 feature(redis): enable key space notifications by default 2024-07-04 17:19:25 +03:00
Houkime
fff8a49992 refactoring(jobs): break out a function returning all jobs 2024-07-04 17:19:25 +03:00
Houkime
5558577927 test(redis): test key event notifications 2024-07-04 17:19:25 +03:00
Houkime
f08dc3ad23 test(async): pubsub 2024-07-04 17:19:25 +03:00
Houkime
94386fc53d chore(nixos): add pytest-asyncio 2024-07-04 17:19:25 +03:00
Houkime
b6118465a0 feature(redis): async connections 2024-07-04 17:19:25 +03:00
Inex Code
4066be38ec chore: Bump version to 3.2.2 2024-07-01 19:25:54 +04:00
Inex Code
7522c2d796 refactor: Change gitea to Forgejo 2024-06-30 23:02:07 +04:00
Inex Code
6e0bf4f2a3 chore: PR cleanup 2024-06-27 17:43:13 +03:00
Inex Code
c42e2ef3ac Revert "feat: move get_subdomain to parent class really"
This reverts commit 4eaefc8321.
2024-06-27 17:43:13 +03:00
Inex Code
8bb9166287 Revert "fix: remove get sub domain from services"
This reverts commit 46fd7a237c.
2024-06-27 17:43:13 +03:00
Inex Code
306b7f898d Revert "feat: rewrite get_url()"
This reverts commit f834c85401.
2024-06-27 17:43:13 +03:00
nhnn
f1cc84b8c8 fix: add migrations to migration list in migrations/__init__.py 2024-06-27 17:43:13 +03:00
02bc74f4c4 fix: only roundcube migration, other services removed 2024-06-27 17:43:13 +03:00
416a0a8725 fix: from review 2024-06-27 17:43:13 +03:00
82a0b557e1 feat: add migration for userdata 2024-06-27 17:43:13 +03:00
7b9420c244 feat: rewrite get_url() 2024-06-27 17:43:13 +03:00
9125d03b35 fix: remove get sub domain from services 2024-06-27 17:43:13 +03:00
2b9b81890b feat: move get_subdomain to parent class really 2024-06-27 17:43:13 +03:00
78dec5c347 feat: move get_subdomain to parent class 2024-06-27 17:43:13 +03:00
4d898f4ee8 feat: add migration for services flake 2024-06-27 17:43:13 +03:00
31feeb211d fix: change roundcube to webmail 2024-06-27 17:43:13 +03:00
a00c4d4268 fix: change return get_folders 2024-06-27 17:43:13 +03:00
9c50f8bba7 fix from review 2024-06-27 17:43:13 +03:00
1b91168d06 style: fix imports 2024-06-27 17:43:13 +03:00
4823491e3e feat: add roundcube service 2024-06-27 17:43:13 +03:00
Maxim Leshchenko
5602c96056 feat(services): rename "sda1" to "system disk" and etc ()
Closes 

Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/122
Reviewed-by: Inex Code <inex.code@selfprivacy.org>
Co-authored-by: Maxim Leshchenko <cnmaks90@gmail.com>
Co-committed-by: Maxim Leshchenko <cnmaks90@gmail.com>
2024-06-27 17:41:46 +03:00
f90eb3fb4c feat: add flake services manager ()
Reviewed-on: https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api/pulls/113
Reviewed-by: Inex Code <inex.code@selfprivacy.org>
Reviewed-by: houkime <houkime@protonmail.com>
Co-authored-by: dettlaff <dettlaff@riseup.net>
Co-committed-by: dettlaff <dettlaff@riseup.net>
2024-06-21 23:35:04 +03:00
nhnn
8b2e4666dd fix: rename PageMeta to LogsPageMeta 2024-06-11 12:36:42 +03:00
nhnn
3d2c79ecb1 feat: streaming of journald entries via graphql subscription 2024-06-06 16:07:08 +03:00
nhnn
fc2ac0fe6d feat: graphql endpoint to fetch system logs from journald 2024-06-06 16:03:16 +03:00
Houkime
cb2a1421bf test(websocket): remove some duplication 2024-05-27 21:30:51 +00:00
Houkime
17ae162156 test(websocket): remove excessive sleeping 2024-05-27 21:30:51 +00:00
Houkime
f772005b17 refactor(jobs): offload job subscription logic to a separate file 2024-05-27 21:30:51 +00:00
Houkime
950093a3b1 feature(websocket): add auth to counter too 2024-05-27 21:30:51 +00:00
Houkime
8fd12a1775 feature(websocket): add auth 2024-05-27 21:30:51 +00:00
Houkime
39f584ad5c test(devices): provide devices for a service test to fix conditional test fail. 2024-05-27 21:30:51 +00:00
Houkime
6d2fdab071 feature(jobs): UNSAFE endpoint to get job updates 2024-05-27 21:30:51 +00:00
Houkime
3910e416db test(jobs): test simple counting 2024-05-27 21:30:51 +00:00
Houkime
967e59271f chore(jobs): shorter typehints and import sorting 2024-05-27 21:30:51 +00:00
Houkime
3b0600efb6 feature(jobs): add subscription endpoint 2024-05-27 21:30:51 +00:00
Houkime
8348f11faf test(websocket): separate ping and init 2024-05-27 21:30:51 +00:00
Houkime
02d337c3f0 test(websocket): ping pong test 2024-05-27 21:30:51 +00:00
Houkime
c19fa227c9 test(websocket) test connection init 2024-05-27 21:30:51 +00:00
Houkime
098abd5149 test(jobs): subscription query generating function 2024-05-27 21:30:51 +00:00
Houkime
4306c94231 test(jobs) test API job format 2024-05-27 21:30:51 +00:00
Houkime
1fadf0214b test(jobs): test Graphql job getting 2024-05-27 21:30:51 +00:00
Houkime
4b1becb4e2 feature(jobs): websocket connection 2024-05-27 21:30:51 +00:00
Houkime
43980f16ea feature(jobs): job update generator 2024-05-27 21:30:51 +00:00
Houkime
b204d4a9b3 feature(redis): enable key space notifications by default 2024-05-27 21:30:51 +00:00
Houkime
8d099c9a22 refactoring(jobs): break out a function returning all jobs 2024-05-27 21:30:51 +00:00
Houkime
5bf5e7462f test(redis): test key event notifications 2024-05-27 21:30:51 +00:00
Houkime
4d60b7264a test(async): pubsub 2024-05-27 21:30:51 +00:00
Houkime
996cde15e1 chore(nixos): add pytest-asyncio 2024-05-27 21:30:51 +00:00
Houkime
862f85b8fd feature(redis): async connections 2024-05-27 21:30:51 +00:00
Inex Code
a742e66cc3 feat: Add "OTHER" as a server provider
We should allow manual SelfPrivacy installations on unsupported server providers. The ServerProvider enum is one of the gatekeepers that prevent this, and we can change it easily, as not much server-side logic relies on it.

The next step would be manual DNS management, but it would be much more involved than just adding the enum value.
2024-05-25 14:12:51 +03:00
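
A minimal sketch of the change's shape; the enum members other than OTHER are illustrative, not an exact copy of the project's ServerProvider:

import strawberry
from enum import Enum

@strawberry.enum
class ServerProvider(Enum):
    HETZNER = "HETZNER"
    DIGITALOCEAN = "DIGITALOCEAN"
    OTHER = "OTHER"  # manual installations on unsupported providers
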
133 changed files with 6896 additions and 1235 deletions
.vscode
CONTRIBUTING.md default.nix flake.lock flake.nix
nixos
selfprivacy_api

View file

@ -1,7 +1,4 @@
{
"python.formatting.provider": "black",
"python.linting.pylintEnabled": true,
"python.linting.enabled": true,
"python.testing.pytestArgs": [
"tests"
],
@ -9,4 +6,4 @@
"python.testing.pytestEnabled": true,
"python.languageServer": "Pylance",
"python.analysis.typeCheckingMode": "basic"
}
}

View file

@ -13,9 +13,9 @@ the [repository](https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api),
For detailed installation information, please review and follow: [link](https://nixos.org/manual/nix/stable/installation/installing-binary.html#installing-a-binary-distribution).
3. **Change directory to the cloned repository and start a nix shell:**
3. **Change directory to the cloned repository and start a nix development shell:**
```cd selfprivacy-rest-api && nix-shell```
```cd selfprivacy-rest-api && nix develop```
Nix will install all of the necessary packages for development work; all further actions will take place only within the nix shell.
@ -31,7 +31,7 @@ the [repository](https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api),
Copy the path that starts with ```/nix/store/``` and ends with ```env/bin/python```
```/nix/store/???-python3-3.9.??-env/bin/python```
```/nix/store/???-python3-3.10.??-env/bin/python```
Click on the python version selection in the lower right corner, and replace the path to the interpreter in the project with the one you copied from the terminal.
@ -43,12 +43,13 @@ the [repository](https://git.selfprivacy.org/SelfPrivacy/selfprivacy-rest-api),
## What to do after making changes to the repository?
**Run unit tests** using ```pytest .```
Make sure that all tests pass successfully and the API works correctly. For convenience, you can use the built-in VScode interface.
**Run unit tests** using ```pytest-vm``` inside of the development shell. This will run all the tests inside a virtual machine, which is necessary for them to pass successfully.
Make sure that all tests pass successfully and the API works correctly.
How to review the percentage of code coverage? Execute the command:
The ```pytest-vm``` command will also print out the coverage of the tests. To export the report to an XML file, use the following command:
```coverage xml```
```coverage run -m pytest && coverage xml && coverage report```
Next, use the recommended extension ```ryanluker.vscode-coverage-gutters```, navigate to one of the test files, and click the "watch" button on the bottom panel of VScode.

View file

@ -14,10 +14,14 @@ pythonPackages.buildPythonPackage rec {
pydantic
pytz
redis
systemd
setuptools
strawberry-graphql
typing-extensions
uvicorn
requests
websockets
httpx
];
pythonImportsCheck = [ "selfprivacy_api" ];
doCheck = false;

flake.lock generated
View file

@ -2,11 +2,11 @@
"nodes": {
"nixpkgs": {
"locked": {
"lastModified": 1709677081,
"narHash": "sha256-tix36Y7u0rkn6mTm0lA45b45oab2cFLqAzDbJxeXS+c=",
"lastModified": 1721949857,
"narHash": "sha256-DID446r8KsmJhbCzx4el8d9SnPiE8qa6+eEQOJ40vR0=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "880992dcc006a5e00dd0591446fdf723e6a51a64",
"rev": "a1cc729dcbc31d9b0d11d86dc7436163548a9665",
"type": "github"
},
"original": {

View file

@ -8,7 +8,7 @@
system = "x86_64-linux";
pkgs = nixpkgs.legacyPackages.${system};
selfprivacy-graphql-api = pkgs.callPackage ./default.nix {
pythonPackages = pkgs.python310Packages;
pythonPackages = pkgs.python312Packages;
rev = self.shortRev or self.dirtyShortRev or "dirty";
};
python = self.packages.${system}.default.pythonModule;
@ -20,6 +20,7 @@
pytest-datadir
pytest-mock
pytest-subprocess
pytest-asyncio
black
mypy
pylsp-mypy
@ -39,6 +40,14 @@
black
nixpkgs-fmt
[linters]
bandit
CI uses the following command:
bandit -ll -r selfprivacy_api
mypy
pyflakes
[testing in NixOS VM]
nixos-test-driver - run an interactive NixOS VM with all dependencies included and 2 disk volumes
@ -65,7 +74,7 @@
SCRIPT=$(cat <<EOF
start_all()
machine.succeed("ln -sf $NIXOS_VM_SHARED_DIR_GUEST -T ${vmtest-src-dir} >&2")
machine.succeed("cd ${vmtest-src-dir} && coverage run -m pytest -v $@ >&2")
machine.succeed("cd ${vmtest-src-dir} && coverage run -m pytest $@ >&2")
machine.succeed("cd ${vmtest-src-dir} && coverage report >&2")
EOF
)
@ -84,8 +93,9 @@
packages = with pkgs; [
nixpkgs-fmt
rclone
redis
valkey
restic
bandit
self.packages.${system}.pytest-vm
# FIXME consider loading this explicitly only after ArchLinux issue is solved
self.checks.x86_64-linux.default.driverInteractive
@ -133,6 +143,7 @@
boot.consoleLogLevel = lib.mkForce 3;
documentation.enable = false;
services.journald.extraConfig = lib.mkForce "";
services.redis.package = pkgs.valkey;
services.redis.servers.sp-api = {
enable = true;
save = [ ];

View file

@ -5,6 +5,19 @@ let
config-id = "default";
nixos-rebuild = "${config.system.build.nixos-rebuild}/bin/nixos-rebuild";
nix = "${config.nix.package.out}/bin/nix";
sp-fetch-remote-module = pkgs.writeShellApplication {
name = "sp-fetch-remote-module";
runtimeInputs = [ config.nix.package.out ];
text = ''
if [ "$#" -ne 1 ]; then
echo "Usage: $0 <URL>"
exit 1
fi
URL="$1"
nix eval --file /etc/sp-fetch-remote-module.nix --raw --apply "f: f { flakeURL = \"$URL\"; }"
'';
};
in
{
options.services.selfprivacy-api = {
@ -41,18 +54,24 @@ in
pkgs.gitMinimal
config.nix.package.out
pkgs.restic
pkgs.rclone
pkgs.mkpasswd
pkgs.util-linux
pkgs.e2fsprogs
pkgs.iproute2
pkgs.postgresql_16.out
sp-fetch-remote-module
];
after = [ "network-online.target" ];
wantedBy = [ "network-online.target" ];
wants = [ "network-online.target" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
# Do not forget to edit Postgres identMap if you change the user!
User = "root";
ExecStart = "${selfprivacy-graphql-api}/bin/app.py";
Restart = "always";
RestartSec = "5";
Slice = "selfprivacy_api.slice";
};
};
systemd.services.selfprivacy-api-worker = {
@ -61,7 +80,7 @@ in
HOME = "/root";
PYTHONUNBUFFERED = "1";
PYTHONPATH =
pkgs.python310Packages.makePythonPath [ selfprivacy-graphql-api ];
pkgs.python312Packages.makePythonPath [ selfprivacy-graphql-api ];
} // config.networking.proxy.envVars;
path = [
"/var/"
@ -73,20 +92,30 @@ in
pkgs.gitMinimal
config.nix.package.out
pkgs.restic
pkgs.rclone
pkgs.mkpasswd
pkgs.util-linux
pkgs.e2fsprogs
pkgs.iproute2
pkgs.postgresql_16.out
sp-fetch-remote-module
];
after = [ "network-online.target" ];
wantedBy = [ "network-online.target" ];
wants = [ "network-online.target" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
# Do not forget to edit Postgres identMap if you change the user!
User = "root";
ExecStart = "${pkgs.python310Packages.huey}/bin/huey_consumer.py selfprivacy_api.task_registry.huey";
ExecStart = "${pkgs.python312Packages.huey}/bin/huey_consumer.py selfprivacy_api.task_registry.huey";
Restart = "always";
RestartSec = "5";
Slice = "selfprivacy_api.slice";
};
};
systemd.slices."selfprivacy_api" = {
name = "selfprivacy_api.slice";
description = "Slice for SelfPrivacy API services";
};
# One shot systemd service to rebuild NixOS using nixos-rebuild
systemd.services.sp-nixos-rebuild = {
description = "nixos-rebuild switch";
@ -107,7 +136,7 @@ in
ExecStart = ''
${nixos-rebuild} switch --flake .#${config-id}
'';
KillMode = "none";
KillMode = "mixed";
SendSIGKILL = "no";
};
restartIfChanged = false;
@ -134,7 +163,7 @@ in
ExecStart = ''
${nixos-rebuild} switch --flake .#${config-id}
'';
KillMode = "none";
KillMode = "mixed";
SendSIGKILL = "no";
};
restartIfChanged = false;
@ -156,7 +185,7 @@ in
ExecStart = ''
${nixos-rebuild} switch --rollback --flake .#${config-id}
'';
KillMode = "none";
KillMode = "mixed";
SendSIGKILL = "no";
};
restartIfChanged = false;

View file

@ -1,7 +1,8 @@
"""
App tokens actions.
App tokens actions.
The only actions on tokens that are accessible from APIs
"""
from datetime import datetime, timezone
from typing import Optional
from pydantic import BaseModel

View file

@ -1,7 +1,7 @@
from selfprivacy_api.utils.block_devices import BlockDevices
from selfprivacy_api.jobs import Jobs, Job
from selfprivacy_api.services import get_service_by_id
from selfprivacy_api.services import ServiceManager
from selfprivacy_api.services.tasks import move_service as move_service_task
@ -14,7 +14,7 @@ class VolumeNotFoundError(Exception):
def move_service(service_id: str, volume_name: str) -> Job:
service = get_service_by_id(service_id)
service = ServiceManager.get_service_by_id(service_id)
if service is None:
raise ServiceNotFoundError(f"No such service:{service_id}")
@ -27,7 +27,7 @@ def move_service(service_id: str, volume_name: str) -> Job:
job = Jobs.add(
type_id=f"services.{service.get_id()}.move",
name=f"Move {service.get_display_name()}",
description=f"Moving {service.get_display_name()} data to {volume.name}",
description=f"Moving {service.get_display_name()} data to {volume.get_display_name().lower()}",
)
move_service_task(service, volume, job)

View file

@ -1,4 +1,5 @@
"""Actions to manage the SSH."""
from typing import Optional
from pydantic import BaseModel
from selfprivacy_api.actions.users import (

View file

@ -1,13 +1,18 @@
"""Actions to manage the system."""
import os
import subprocess
import pytz
from typing import Optional, List
from pydantic import BaseModel
from selfprivacy_api.jobs import Job, JobStatus, Jobs
from selfprivacy_api.jobs.upgrade_system import rebuild_system_task
from selfprivacy_api.utils import WriteUserData, ReadUserData
from selfprivacy_api.utils import UserDataFiles
from selfprivacy_api.graphql.queries.providers import DnsProvider
def get_timezone() -> str:
@ -39,6 +44,18 @@ class UserDataAutoUpgradeSettings(BaseModel):
allowReboot: bool = False
def set_dns_provider(provider: DnsProvider, token: str):
with WriteUserData() as user_data:
if "dns" not in user_data.keys():
user_data["dns"] = {}
user_data["dns"]["provider"] = provider.value
with WriteUserData(file_type=UserDataFiles.SECRETS) as secrets:
if "dns" not in secrets.keys():
secrets["dns"] = {}
secrets["dns"]["apiKey"] = token
def get_auto_upgrade_settings() -> UserDataAutoUpgradeSettings:
"""Get the auto-upgrade settings"""
with ReadUserData() as user_data:
@ -48,14 +65,14 @@ def get_auto_upgrade_settings() -> UserDataAutoUpgradeSettings:
def set_auto_upgrade_settings(
enalbe: Optional[bool] = None, allowReboot: Optional[bool] = None
enable: Optional[bool] = None, allowReboot: Optional[bool] = None
) -> None:
"""Set the auto-upgrade settings"""
with WriteUserData() as user_data:
if "autoUpgrade" not in user_data:
user_data["autoUpgrade"] = {}
if enalbe is not None:
user_data["autoUpgrade"]["enable"] = enalbe
if enable is not None:
user_data["autoUpgrade"]["enable"] = enable
if allowReboot is not None:
user_data["autoUpgrade"]["allowReboot"] = allowReboot
@ -89,14 +106,18 @@ def run_blocking(cmd: List[str], new_session: bool = False) -> str:
return stdout
def rebuild_system() -> Job:
"""Rebuild the system"""
job = Jobs.add(
def add_rebuild_job() -> Job:
return Jobs.add(
type_id="system.nixos.rebuild",
name="Rebuild system",
description="Applying the new system configuration by building the new NixOS generation.",
status=JobStatus.CREATED,
)
def rebuild_system() -> Job:
"""Rebuild the system"""
job = add_rebuild_job()
rebuild_system_task(job)
return job

View file

@ -1,4 +1,5 @@
"""Actions to manage the users."""
import re
from typing import Optional
from pydantic import BaseModel

View file

@ -1,8 +1,12 @@
#!/usr/bin/env python3
"""SelfPrivacy server management API"""
import logging
import os
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from strawberry.fastapi import GraphQLRouter
from strawberry.subscriptions import GRAPHQL_TRANSPORT_WS_PROTOCOL, GRAPHQL_WS_PROTOCOL
import uvicorn
@ -11,10 +15,20 @@ from selfprivacy_api.graphql.schema import schema
from selfprivacy_api.migrations import run_migrations
log_level = os.getenv("LOG_LEVEL", "INFO").upper()
logging.basicConfig(
level=getattr(logging, log_level, logging.INFO), format="%(levelname)s: %(message)s"
)
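# Note: getattr() falls back to logging.INFO for unrecognized names, so an
# invalid LOG_LEVEL such as "verbose" degrades gracefully instead of raising.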
app = FastAPI()
graphql_app = GraphQLRouter(
graphql_app: GraphQLRouter = GraphQLRouter(
schema,
subscription_protocols=[
GRAPHQL_TRANSPORT_WS_PROTOCOL,
GRAPHQL_WS_PROTOCOL,
],
)
app.add_middleware(

View file

@ -1,16 +1,16 @@
"""
This module contains the controller class for backups.
"""
from datetime import datetime, timedelta, timezone
import time
import os
from os import statvfs
from typing import Callable, List, Optional
from os.path import exists
from selfprivacy_api.services import ServiceManager
from selfprivacy_api.services import (
get_service_by_id,
get_all_services,
)
from selfprivacy_api.services.service import (
Service,
ServiceStatus,
@ -30,6 +30,7 @@ from selfprivacy_api.graphql.common_types.backup import (
from selfprivacy_api.models.backup.snapshot import Snapshot
from selfprivacy_api.utils.block_devices import BlockDevices
from selfprivacy_api.backup.providers.provider import AbstractBackupProvider
from selfprivacy_api.backup.providers import get_provider
@ -40,6 +41,7 @@ from selfprivacy_api.backup.jobs import (
add_backup_job,
get_restore_job,
add_restore_job,
get_backup_fails,
)
@ -245,9 +247,10 @@ class Backups:
try:
if service.can_be_backed_up() is False:
raise ValueError("cannot backup a non-backuppable service")
folders = service.get_folders()
folders = service.get_folders_to_back_up()
service_name = service.get_id()
service.pre_backup()
service.pre_backup(job=job)
Jobs.update(job, status=JobStatus.RUNNING, status_text="Uploading backup")
snapshot = Backups.provider().backupper.start_backup(
folders,
service_name,
@ -257,16 +260,27 @@ class Backups:
Backups._on_new_snapshot_created(service_name, snapshot)
if reason == BackupReason.AUTO:
Backups._prune_auto_snaps(service)
service.post_restore()
service.post_backup(job=job)
except Exception as error:
Jobs.update(job, status=JobStatus.ERROR, error=str(error))
raise error
Jobs.update(job, status=JobStatus.FINISHED)
Jobs.update(job, status=JobStatus.FINISHED, result="Backup finished")
if reason in [BackupReason.AUTO, BackupReason.PRE_RESTORE]:
Jobs.set_expiration(job, AUTOBACKUP_JOB_EXPIRATION_SECONDS)
# To not confuse user
if reason is not BackupReason.PRE_RESTORE:
Backups.clear_failed_backups(service)
return Backups.sync_date_from_cache(snapshot)
@staticmethod
def clear_failed_backups(service: Service):
jobs_to_clear = get_backup_fails(service)
for job in jobs_to_clear:
Jobs.remove(job)
@staticmethod
def sync_date_from_cache(snapshot: Snapshot) -> Snapshot:
"""
@ -274,10 +288,16 @@ class Backups:
This is a convenience, maybe it is better to write a special comparison
function for snapshots
"""
return Storage.get_cached_snapshot_by_id(snapshot.id)
snap = Storage.get_cached_snapshot_by_id(snapshot.id)
if snap is None:
raise ValueError(
f"snapshot {snapshot.id} date syncing failed, this should never happen normally"
)
return snap
@staticmethod
def _auto_snaps(service):
def _auto_snaps(service) -> List[Snapshot]:
return [
snap
for snap in Backups.get_snapshots(service)
@ -375,7 +395,7 @@ class Backups:
@staticmethod
def prune_all_autosnaps() -> None:
for service in get_all_services():
for service in ServiceManager.get_all_services():
Backups._prune_auto_snaps(service)
# Restoring
@ -430,7 +450,7 @@ class Backups:
snapshot: Snapshot, strategy=RestoreStrategy.DOWNLOAD_VERIFY_OVERWRITE
) -> None:
"""Restores a snapshot to its original service using the given strategy"""
service = get_service_by_id(snapshot.service_name)
service = ServiceManager.get_service_by_id(snapshot.service_name)
if service is None:
raise ValueError(
f"snapshot has a nonexistent service: {snapshot.service_name}"
@ -443,7 +463,9 @@ class Backups:
job, status=JobStatus.RUNNING, status_text="Stopping the service"
)
with StoppedService(service):
Backups.assert_dead(service)
if not service.is_always_active():
Backups.assert_dead(service)
service.pre_restore(job=job)
if strategy == RestoreStrategy.INPLACE:
Backups._inplace_restore(service, snapshot, job)
else: # verify_before_download is our default
@ -456,7 +478,7 @@ class Backups:
service, snapshot.id, verify=True
)
service.post_restore()
service.post_restore(job=job)
Jobs.update(
job,
status=JobStatus.RUNNING,
@ -474,7 +496,7 @@ class Backups:
def _assert_restorable(
snapshot: Snapshot, strategy=RestoreStrategy.DOWNLOAD_VERIFY_OVERWRITE
) -> None:
service = get_service_by_id(snapshot.service_name)
service = ServiceManager.get_service_by_id(snapshot.service_name)
if service is None:
raise ValueError(
f"snapshot has a nonexistent service: {snapshot.service_name}"
@ -507,7 +529,7 @@ class Backups:
snapshot_id: str,
verify=True,
) -> None:
folders = service.get_folders()
folders = service.get_folders_to_back_up()
Backups.provider().backupper.restore_from_backup(
snapshot_id,
@ -645,7 +667,7 @@ class Backups:
"""Returns a list of services that should be backed up at a given time"""
return [
service
for service in get_all_services()
for service in ServiceManager.get_all_services()
if Backups.is_time_to_backup_service(service, time)
]
@ -707,13 +729,23 @@ class Backups:
Returns the amount of space available on the volume the given
service is located on.
"""
folders = service.get_folders()
folders = service.get_folders_to_back_up()
if folders == []:
raise ValueError("unallocated service", service.get_id())
# We assume all folders of one service live at the same volume
fs_info = statvfs(folders[0])
usable_bytes = fs_info.f_frsize * fs_info.f_bavail
example_folder = folders[0]
if exists(example_folder):
fs_info = statvfs(example_folder)
usable_bytes = fs_info.f_frsize * fs_info.f_bavail
else:
# Look at the block device as it is written in settings
label = service.get_drive()
device = BlockDevices().get_block_device(label)
if device is None:
raise ValueError("nonexistent drive ", label, " for ", service.get_id())
usable_bytes = int(device.fsavail)
return usable_bytes
@staticmethod
@ -739,3 +771,52 @@ class Backups:
ServiceStatus.FAILED,
]:
raise NotDeadError(service)
@staticmethod
def is_same_slice(snap1: Snapshot, snap2: Snapshot) -> bool:
# Determines if the snaps were made roughly in the same time period
period_minutes = Backups.autobackup_period_minutes()
# Autobackups are not guaranteed to be enabled during restore.
# If they are not, period will be none
# We ASSUME that picking latest snap of the same day is safe enough
# But it is potentially problematic and is better done with metadata I think.
if period_minutes is None:
period_minutes = 24 * 60
if snap1.created_at > snap2.created_at + timedelta(minutes=period_minutes):
return False
if snap1.created_at < snap2.created_at - timedelta(minutes=period_minutes):
return False
return True
@staticmethod
def last_backup_slice() -> List[Snapshot]:
"""
Guarantees that the slice is valid, ie, it has an api snapshot too
Or empty
"""
slice: List[Snapshot] = []
# We need snapshots that were made around the same time.
# And we need to be sure that api snap is in there
# That's why we form the slice around api snap
api_snaps = Backups.get_snapshots(ServiceManager())
if api_snaps == []:
return []
api_snaps.sort(key=lambda x: x.created_at, reverse=True)
api_snap = api_snaps[0] # pick the latest one
for service in ServiceManager.get_all_services():
if isinstance(service, ServiceManager):
continue
snaps = Backups.get_snapshots(service)
snaps.sort(key=lambda x: x.created_at, reverse=True)
for snap in snaps:
if Backups.is_same_slice(snap, api_snap):
slice.append(snap)
break
slice.append(api_snap)
return slice
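
To make the time-window rule concrete, here is a self-contained sketch that mirrors is_same_slice's two-sided check (values are illustrative):

from datetime import datetime, timedelta, timezone

def same_slice(t1: datetime, t2: datetime, period_minutes: int = 24 * 60) -> bool:
    # snapshots belong to one backup session iff their creation times
    # differ by at most one autobackup period
    return abs(t1 - t2) <= timedelta(minutes=period_minutes)

api_snap_time = datetime(2024, 9, 13, 3, 0, tzinfo=timezone.utc)
assert same_slice(api_snap_time, api_snap_time + timedelta(minutes=30))
assert not same_slice(api_snap_time, api_snap_time - timedelta(days=2))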

View file

@ -4,28 +4,33 @@ import subprocess
import json
import datetime
import tempfile
import logging
import os
from typing import List, Optional, TypeVar, Callable
from collections.abc import Iterable
from json.decoder import JSONDecodeError
from os.path import exists, join
from os import mkdir
from os.path import exists, join, isfile, islink, isdir
from shutil import rmtree
from selfprivacy_api.utils.waitloop import wait_until_success
from selfprivacy_api.graphql.common_types.backup import BackupReason
from selfprivacy_api.backup.util import output_yielder, sync
from selfprivacy_api.backup.backuppers import AbstractBackupper
from selfprivacy_api.models.backup.snapshot import Snapshot
from selfprivacy_api.backup.jobs import get_backup_job
from selfprivacy_api.services import get_service_by_id
from selfprivacy_api.services import ServiceManager
from selfprivacy_api.jobs import Jobs, JobStatus, Job
from selfprivacy_api.backup.local_secret import LocalBackupSecret
SHORT_ID_LEN = 8
FILESYSTEM_TIMEOUT_SEC = 60
T = TypeVar("T", bound=Callable)
logger = logging.getLogger(__name__)
def unlocked_repo(func: T) -> T:
"""unlock repo and retry if it appears to be locked"""
@ -189,7 +194,7 @@ class ResticBackupper(AbstractBackupper):
@staticmethod
def _get_backup_job(service_name: str) -> Optional[Job]:
service = get_service_by_id(service_name)
service = ServiceManager.get_service_by_id(service_name)
if service is None:
raise ValueError("No service with id ", service_name)
@ -361,7 +366,27 @@ class ResticBackupper(AbstractBackupper):
parsed_output = ResticBackupper.parse_json_output(output)
return parsed_output["total_size"]
except ValueError as error:
raise ValueError("cannot restore a snapshot: " + output) from error
raise ValueError("Cannot restore a snapshot: " + output) from error
def _rm_all_folder_contents(self, folder: str) -> None:
"""
Remove all contents of a folder, including subfolders.
Raises:
ValueError: If it encounters an error while removing contents.
"""
try:
for filename in os.listdir(folder):
path = join(folder, filename)
try:
if isfile(path) or islink(path):
os.unlink(path)
elif isdir(path):
rmtree(path)
except Exception as error:
raise ValueError("Cannot remove folder contents: ", path) from error
except OSError as error:
raise ValueError("Cannot access folder: ", folder) from error
@unlocked_repo
def restore_from_backup(
@ -374,7 +399,7 @@ class ResticBackupper(AbstractBackupper):
Restore from backup with restic
"""
if folders is None or folders == []:
raise ValueError("cannot restore without knowing where to!")
raise ValueError("Cannot restore without knowing where to!")
with tempfile.TemporaryDirectory() as temp_dir:
if verify:
@ -391,8 +416,10 @@ class ResticBackupper(AbstractBackupper):
else: # attempting inplace restore
for folder in folders:
rmtree(folder)
mkdir(folder)
wait_until_success(
lambda: self._rm_all_folder_contents(folder),
timeout_sec=FILESYSTEM_TIMEOUT_SEC,
)
self._raw_verified_restore(snapshot_id, target="/")
return
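
The retry wrapper used above comes from selfprivacy_api.utils.waitloop; the following is only a sketch of its likely contract (the parameter names beyond timeout_sec are assumptions, not the real signature):

import time

def wait_until_success(action, timeout_sec: int = 60, interval: float = 0.5):
    # keep retrying the action until it stops raising or the timeout expires
    start = time.monotonic()
    while True:
        try:
            return action()
        except Exception:
            if time.monotonic() - start > timeout_sec:
                raise
            time.sleep(interval)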

View file

@ -1,9 +1,9 @@
from typing import Optional, List
from typing import Optional, List, Iterable
from selfprivacy_api.models.backup.snapshot import Snapshot
from selfprivacy_api.jobs import Jobs, Job, JobStatus
from selfprivacy_api.services.service import Service
from selfprivacy_api.services import get_service_by_id
from selfprivacy_api.services import ServiceManager
def job_type_prefix(service: Service) -> str:
@ -67,40 +67,100 @@ def add_backup_job(service: Service) -> Job:
return job
def add_restore_job(snapshot: Snapshot) -> Job:
service = get_service_by_id(snapshot.service_name)
if service is None:
raise ValueError(f"no such service: {snapshot.service_name}")
if is_something_running_for(service):
message = (
f"Cannot start a restore of {service.get_id()}, another operation is running: "
+ get_jobs_by_service(service)[0].type_id
)
raise ValueError(message)
display_name = service.get_display_name()
def complain_about_service_operation_running(service: Service) -> str:
message = f"Cannot start a restore of {service.get_id()}, another operation is running: {get_jobs_by_service(service)[0].type_id}"
raise ValueError(message)
def add_total_restore_job() -> Job:
for service in ServiceManager.get_enabled_services():
ensure_nothing_runs_for(service)
job = Jobs.add(
type_id=restore_job_type(service),
name=f"Restore {display_name}",
description=f"restoring {display_name} from {snapshot.id}",
type_id="backups.total_restore",
name=f"Total restore",
description="Restoring all enabled services",
)
return job
def ensure_nothing_runs_for(service: Service):
if (
# TODO: try removing the exception. Why would we have it?
not isinstance(service, ServiceManager)
and is_something_running_for(service) is True
):
complain_about_service_operation_running(service)
def add_total_backup_job() -> Job:
for service in ServiceManager.get_enabled_services():
ensure_nothing_runs_for(service)
job = Jobs.add(
type_id="backups.total_backup",
name=f"Total backup",
description="Backing up all the enabled services",
)
return job
def add_restore_job(snapshot: Snapshot) -> Job:
service = ServiceManager.get_service_by_id(snapshot.service_name)
if service is None:
raise ValueError(f"no such service: {snapshot.service_name}")
if is_something_running_for(service):
complain_about_service_operation_running(service)
display_name = service.get_display_name()
job = Jobs.add(
type_id=restore_job_type(service),
name=f"Restore {display_name}",
description=f"Restoring {display_name} from {snapshot.id}",
)
return job
def last_if_any(jobs: List[Job]) -> Optional[Job]:
if not jobs:
return None
newest_jobs = sorted(jobs, key=lambda x: x.created_at, reverse=True)
return newest_jobs[0]
def get_job_by_type(type_id: str) -> Optional[Job]:
for job in Jobs.get_jobs():
if job.type_id == type_id and job.status in [
JobStatus.CREATED,
JobStatus.RUNNING,
]:
return job
return None
jobs = intersection(get_jobs_by_type(type_id), get_ok_jobs())
return last_if_any(jobs)
def get_failed_job_by_type(type_id: str) -> Optional[Job]:
for job in Jobs.get_jobs():
if job.type_id == type_id and job.status == JobStatus.ERROR:
return job
return None
jobs = intersection(get_jobs_by_type(type_id), get_failed_jobs())
return last_if_any(jobs)
def get_jobs_by_type(type_id: str):
return [job for job in Jobs.get_jobs() if job.type_id == type_id]
# Can be moved out to Jobs
def get_ok_jobs() -> List[Job]:
return [
job
for job in Jobs.get_jobs()
if job.status
in [
JobStatus.CREATED,
JobStatus.RUNNING,
]
]
# Can be moved out to Jobs
def get_failed_jobs() -> List[Job]:
return [job for job in Jobs.get_jobs() if job.status == JobStatus.ERROR]
def intersection(a: Iterable, b: Iterable):
return [x for x in a if x in b]
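# A self-contained sketch of how the helpers above compose: the failed job of
# a given type is the newest ERROR job of that type (StubJob is hypothetical).
from dataclasses import dataclass
from datetime import datetime

@dataclass
class StubJob:
    type_id: str
    status: str
    created_at: datetime

_jobs = [
    StubJob("services.email.backup", "ERROR", datetime(2024, 9, 1)),
    StubJob("services.email.backup", "ERROR", datetime(2024, 9, 2)),
    StubJob("services.email.backup", "FINISHED", datetime(2024, 9, 3)),
]
_failed = [j for j in _jobs if j.status == "ERROR"]
_of_type = [j for j in _jobs if j.type_id == "services.email.backup"]
_newest = sorted(
    [j for j in _failed if j in _of_type], key=lambda x: x.created_at, reverse=True
)[0]
assert _newest.created_at == datetime(2024, 9, 2)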
def get_backup_job(service: Service) -> Optional[Job]:
@ -111,5 +171,9 @@ def get_backup_fail(service: Service) -> Optional[Job]:
return get_failed_job_by_type(backup_job_type(service))
def get_backup_fails(service: Service) -> List[Job]:
return intersection(get_failed_jobs(), get_jobs_by_type(backup_job_type(service)))
def get_restore_job(service: Service) -> Optional[Job]:
return get_job_by_type(restore_job_type(service))

View file

@ -3,7 +3,8 @@ An abstract class for BackBlaze, S3 etc.
It assumes that while some providers are supported via restic/rclone, others
may require different backends
"""
from abc import ABC, abstractmethod
from abc import ABC
from selfprivacy_api.backup.backuppers import AbstractBackupper
from selfprivacy_api.graphql.queries.providers import (
BackupProvider as BackupProviderEnum,

View file

@ -1,6 +1,7 @@
"""
Module for storing backup related data in redis.
"""
from typing import List, Optional
from datetime import datetime

View file

@ -1,7 +1,9 @@
"""
The tasks module contains the worker tasks that are used to back up and restore
"""
from datetime import datetime, timezone
from typing import List
from selfprivacy_api.graphql.common_types.backup import (
RestoreStrategy,
@ -12,10 +14,12 @@ from selfprivacy_api.models.backup.snapshot import Snapshot
from selfprivacy_api.utils.huey import huey
from huey import crontab
from selfprivacy_api.services import get_service_by_id
from selfprivacy_api.services import ServiceManager, Service
from selfprivacy_api.backup import Backups
from selfprivacy_api.backup.jobs import add_autobackup_job
from selfprivacy_api.jobs import Jobs, JobStatus, Job
from selfprivacy_api.jobs.upgrade_system import rebuild_system
from selfprivacy_api.actions.system import add_rebuild_job
SNAPSHOT_CACHE_TTL_HOURS = 6
@ -31,13 +35,21 @@ def validate_datetime(dt: datetime) -> bool:
return Backups.is_time_to_backup(dt)
def report_job_error(error: Exception, job: Job):
Jobs.update(
job,
status=JobStatus.ERROR,
error=type(error).__name__ + ": " + str(error),
)
# huey tasks need to return something
@huey.task()
def start_backup(service_id: str, reason: BackupReason = BackupReason.EXPLICIT) -> bool:
"""
The worker task that starts the backup process.
"""
service = get_service_by_id(service_id)
service = ServiceManager.get_service_by_id(service_id)
if service is None:
raise ValueError(f"No such service: {service_id}")
Backups.back_up(service, reason)
@ -72,36 +84,10 @@ def restore_snapshot(
return True
def do_autobackup() -> None:
"""
Body of autobackup task, broken out to test it
For some reason, we cannot launch periodic huey tasks
inside tests
"""
time = datetime.utcnow().replace(tzinfo=timezone.utc)
services_to_back_up = Backups.services_to_back_up(time)
if not services_to_back_up:
return
job = add_autobackup_job(services_to_back_up)
progress_per_service = 100 // len(services_to_back_up)
progress = 0
Jobs.update(job, JobStatus.RUNNING, progress=progress)
for service in services_to_back_up:
try:
Backups.back_up(service, BackupReason.AUTO)
except Exception as error:
Jobs.update(
job,
status=JobStatus.ERROR,
error=type(error).__name__ + ": " + str(error),
)
return
progress = progress + progress_per_service
Jobs.update(job, JobStatus.RUNNING, progress=progress)
Jobs.update(job, JobStatus.FINISHED)
@huey.task()
def full_restore(job: Job) -> bool:
do_full_restore(job)
return True
@huey.periodic_task(validate_datetime=validate_datetime)
@ -112,6 +98,140 @@ def automatic_backup() -> None:
do_autobackup()
@huey.task()
def total_backup(job: Job) -> bool:
do_total_backup(job)
return True
@huey.periodic_task(crontab(hour="*/" + str(SNAPSHOT_CACHE_TTL_HOURS)))
def reload_snapshot_cache():
Backups.force_snapshot_cache_reload()
def back_up_multiple(
job: Job,
services_to_back_up: List[Service],
reason: BackupReason = BackupReason.EXPLICIT,
):
if services_to_back_up == []:
return
progress_per_service = 100 // len(services_to_back_up)
progress = 0
Jobs.update(job, JobStatus.RUNNING, progress=progress)
for service in services_to_back_up:
try:
Backups.back_up(service, reason)
except Exception as error:
report_job_error(error, job)
raise error
progress = progress + progress_per_service
Jobs.update(job, JobStatus.RUNNING, progress=progress)
def do_total_backup(job: Job) -> None:
"""
Body of total backup task, broken out to test it
"""
back_up_multiple(job, ServiceManager.get_enabled_services())
Jobs.update(job, JobStatus.FINISHED)
def do_autobackup() -> None:
"""
Body of autobackup task, broken out to test it
For some reason, we cannot launch periodic huey tasks
inside tests
"""
time = datetime.now(timezone.utc)
backups_were_disabled = Backups.autobackup_period_minutes() is None
if backups_were_disabled:
# Temporarily enable autobackup
Backups.set_autobackup_period_minutes(24 * 60) # 1 day
services_to_back_up = Backups.services_to_back_up(time)
if not services_to_back_up:
return
job = add_autobackup_job(services_to_back_up)
back_up_multiple(job, services_to_back_up, BackupReason.AUTO)
if backups_were_disabled:
Backups.set_autobackup_period_minutes(0)
Jobs.update(job, JobStatus.FINISHED)
# there is no point of returning the job
# this code is called with a delay
def eligible_for_full_restoration(snap: Snapshot):
service = ServiceManager.get_service_by_id(snap.service_name)
if service is None:
return False
if service.is_enabled() is False:
return False
return True
def which_snapshots_to_full_restore() -> list[Snapshot]:
autoslice = Backups.last_backup_slice()
api_snapshot = None
for snap in autoslice:
if snap.service_name == ServiceManager.get_id():
api_snapshot = snap
autoslice.remove(snap)
if api_snapshot is None:
raise ValueError(
"Cannot restore, no configuration snapshot found. This particular error should be unreachable"
)
snapshots_to_restore = [
snap for snap in autoslice if eligible_for_full_restoration(snap)
]
# API should be restored in the very end of the list because it requires rebuild right afterwards
snapshots_to_restore.append(api_snapshot)
return snapshots_to_restore
def do_full_restore(job: Job) -> None:
"""
Body of the full restore task, a part of server migration.
Broken out to test it independently from task infra
"""
Jobs.update(
job,
JobStatus.RUNNING,
status_text="Finding the last autobackup session",
progress=0,
)
snapshots_to_restore = which_snapshots_to_full_restore()
progress_per_service = 99 // len(snapshots_to_restore)
progress = 0
Jobs.update(job, JobStatus.RUNNING, progress=progress)
for snap in snapshots_to_restore:
try:
Backups.restore_snapshot(snap)
except Exception as error:
report_job_error(error, job)
return
progress = progress + progress_per_service
Jobs.update(
job,
JobStatus.RUNNING,
status_text=f"restoring {snap.service_name}",
progress=progress,
)
Jobs.update(job, JobStatus.RUNNING, status_text="rebuilding system", progress=99)
# Adding a separate job to not confuse the user with jumping progress bar
rebuild_job = add_rebuild_job()
rebuild_system(rebuild_job)
Jobs.update(job, JobStatus.FINISHED)
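
A quick illustration of the progress arithmetic above, which reserves the final percent of the bar for the rebuild job (service names are hypothetical):

snapshots_to_restore = ["email", "nextcloud", "api"]
progress_per_service = 99 // len(snapshots_to_restore)  # 33
# after every service is restored, progress sits at 99;
# the separate rebuild job then brings the system to its new generation
assert progress_per_service * len(snapshots_to_restore) == 99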

View file

@ -27,4 +27,4 @@ async def get_token_header(
def get_api_version() -> str:
"""Get API version"""
return "3.2.1"
return "3.5.0"

View file

@ -1,4 +1,5 @@
"""GraphQL API for SelfPrivacy."""
# pylint: disable=too-few-public-methods
import typing
from strawberry.permission import BasePermission
@ -16,6 +17,10 @@ class IsAuthenticated(BasePermission):
token = info.context["request"].headers.get("Authorization")
if token is None:
token = info.context["request"].query_params.get("token")
if token is None:
connection_params = info.context.get("connection_params")
if connection_params is not None:
token = connection_params.get("Authorization")
if token is None:
return False
return is_token_valid(token.replace("Bearer ", ""))
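
Distilled from the permission class above, the token lookup order as a self-contained function (the function name and dict-based interface are illustrative):

from typing import Optional

def extract_token(
    headers: dict, query_params: dict, connection_params: Optional[dict]
) -> Optional[str]:
    # 1) Authorization header, 2) ?token= query param, 3) ws connectionInit payload
    token = headers.get("Authorization") or query_params.get("token")
    if token is None and connection_params is not None:
        token = connection_params.get("Authorization")
    return token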

View file

@ -1,4 +1,5 @@
"""Backup"""
# pylint: disable=too-few-public-methods
from enum import Enum
import strawberry

View file

@ -1,4 +1,5 @@
"""Jobs status"""
# pylint: disable=too-few-public-methods
import datetime
import typing

View file

@ -6,7 +6,8 @@ import strawberry
from selfprivacy_api.graphql.common_types.backup import BackupReason
from selfprivacy_api.graphql.common_types.dns import DnsRecord
from selfprivacy_api.services import get_service_by_id, get_services_by_location
from selfprivacy_api.models.services import License
from selfprivacy_api.services import ServiceManager
from selfprivacy_api.services import Service as ServiceInterface
from selfprivacy_api.services import ServiceDnsRecord
@ -23,7 +24,7 @@ def get_usages(root: "StorageVolume") -> list["StorageUsageInterface"]:
used_space=str(service.get_storage_usage()),
volume=get_volume_by_id(service.get_drive()),
)
for service in get_services_by_location(root.name)
for service in ServiceManager.get_services_by_location(root.name)
]
@ -71,9 +72,31 @@ class ServiceStatusEnum(Enum):
OFF = "OFF"
@strawberry.enum
class SupportLevelEnum(Enum):
"""Enum representing the support level of a service."""
NORMAL = "normal"
EXPERIMENTAL = "experimental"
DEPRECATED = "deprecated"
COMMUNITY = "community"
UNKNOWN = "unknown"
@strawberry.experimental.pydantic.type(model=License)
class LicenseType:
free: strawberry.auto
full_name: strawberry.auto
redistributable: strawberry.auto
short_name: strawberry.auto
spdx_id: strawberry.auto
url: strawberry.auto
deprecated: strawberry.auto
def get_storage_usage(root: "Service") -> ServiceStorageUsage:
"""Get storage usage for a service"""
service = get_service_by_id(root.id)
service = ServiceManager.get_service_by_id(root.id)
if service is None:
return ServiceStorageUsage(
service=service,
@ -103,6 +126,69 @@ def service_dns_to_graphql(record: ServiceDnsRecord) -> DnsRecord:
)
@strawberry.interface
class ConfigItem:
field_id: str
description: str
widget: str
type: str
@strawberry.type
class StringConfigItem(ConfigItem):
value: str
default_value: str
regex: Optional[str]
@strawberry.type
class BoolConfigItem(ConfigItem):
value: bool
default_value: bool
@strawberry.type
class EnumConfigItem(ConfigItem):
value: str
default_value: str
options: list[str]
def config_item_to_graphql(item: dict) -> ConfigItem:
item_type = item.get("type")
if item_type == "string":
return StringConfigItem(
field_id=item["id"],
description=item["description"],
widget=item["widget"],
type=item_type,
value=item["value"],
default_value=item["default_value"],
regex=item.get("regex"),
)
elif item_type == "bool":
return BoolConfigItem(
field_id=item["id"],
description=item["description"],
widget=item["widget"],
type=item_type,
value=item["value"],
default_value=item["default_value"],
)
elif item_type == "enum":
return EnumConfigItem(
field_id=item["id"],
description=item["description"],
widget=item["widget"],
type=item_type,
value=item["value"],
default_value=item["default_value"],
options=item["options"],
)
else:
raise ValueError(f"Unknown config item type {item_type}")
@strawberry.type
class Service:
id: str
@ -112,14 +198,20 @@ class Service:
is_movable: bool
is_required: bool
is_enabled: bool
is_installed: bool
is_system_service: bool
can_be_backed_up: bool
backup_description: str
status: ServiceStatusEnum
url: Optional[str]
license: List[LicenseType]
homepage: Optional[str]
source_page: Optional[str]
support_level: SupportLevelEnum
@strawberry.field
def dns_records(self) -> Optional[List[DnsRecord]]:
service = get_service_by_id(self.id)
service = ServiceManager.get_service_by_id(self.id)
if service is None:
raise LookupError(f"no service {self.id}. Should be unreachable")
@ -132,6 +224,22 @@ class Service:
"""Get storage usage for a service"""
return get_storage_usage(self)
@strawberry.field
def configuration(self) -> Optional[List[ConfigItem]]:
"""Get service configuration"""
service = ServiceManager.get_service_by_id(self.id)
if service is None:
return None
config_items = service.get_configuration()
# If it is an empty dict, return None
if not config_items:
return None
# By the "type" field convert every dict into a ConfigItem. In the future there will be more types.
unsorted_config_items = [config_items[item] for item in config_items]
# Sort the items by their weight. If there is no weight, implicitly set it to 50.
config_items = sorted(unsorted_config_items, key=lambda x: x.get("weight", 50))
return [config_item_to_graphql(item) for item in config_items]
# TODO: fill this
@strawberry.field
def backup_snapshots(self) -> Optional[List["SnapshotInfo"]]:
@ -156,10 +264,18 @@ def service_to_graphql_service(service: ServiceInterface) -> Service:
is_movable=service.is_movable(),
is_required=service.is_required(),
is_enabled=service.is_enabled(),
is_installed=service.is_installed(),
can_be_backed_up=service.can_be_backed_up(),
backup_description=service.get_backup_description(),
status=ServiceStatusEnum(service.get_status().value),
url=service.get_url(),
is_system_service=service.is_system_service(),
license=[
LicenseType.from_pydantic(license) for license in service.get_license()
],
homepage=service.get_homepage(),
source_page=service.get_source_page(),
support_level=SupportLevelEnum(service.get_support_level().value),
)
@ -169,9 +285,9 @@ def get_volume_by_id(volume_id: str) -> Optional[StorageVolume]:
if volume is None:
return None
return StorageVolume(
total_space=str(volume.fssize)
if volume.fssize is not None
else str(volume.size),
total_space=(
str(volume.fssize) if volume.fssize is not None else str(volume.size)
),
free_space=str(volume.fsavail),
used_space=str(volume.fsused),
root=volume.name == "sda1",


@ -1,4 +1,5 @@
"""API access mutations"""
# pylint: disable=too-few-public-methods
import datetime
import typing


@ -1,6 +1,8 @@
import typing
import strawberry
from selfprivacy_api.utils.graphql import api_job_mutation_error
from selfprivacy_api.jobs import Jobs
from selfprivacy_api.graphql import IsAuthenticated
@ -19,13 +21,21 @@ from selfprivacy_api.graphql.common_types.backup import (
)
from selfprivacy_api.backup import Backups
from selfprivacy_api.services import get_service_by_id
from selfprivacy_api.services import ServiceManager
from selfprivacy_api.backup.tasks import (
start_backup,
restore_snapshot,
prune_autobackup_snapshots,
full_restore,
total_backup,
)
from selfprivacy_api.backup.jobs import add_backup_job, add_restore_job
from selfprivacy_api.backup.jobs import (
add_backup_job,
add_restore_job,
add_total_restore_job,
add_total_backup_job,
)
from selfprivacy_api.backup.local_secret import LocalBackupSecret
@strawberry.input
@ -40,6 +50,8 @@ class InitializeRepositoryInput:
# Key ID and key for Backblaze
login: str
password: str
# For migration. If set, no new secret is generated
local_secret: typing.Optional[str] = None
@strawberry.type
@ -63,7 +75,13 @@ class BackupMutations:
location=repository.location_name,
repo_id=repository.location_id,
)
Backups.init_repo()
secret = repository.local_secret
if secret is not None:
LocalBackupSecret.set(secret)
Backups.force_snapshot_cache_reload()
else:
Backups.init_repo()
return GenericBackupConfigReturn(
success=True,
message="",
@ -138,7 +156,7 @@ class BackupMutations:
def start_backup(self, service_id: str) -> GenericJobMutationReturn:
"""Start backup"""
service = get_service_by_id(service_id)
service = ServiceManager.get_service_by_id(service_id)
if service is None:
return GenericJobMutationReturn(
success=False,
@ -157,6 +175,50 @@ class BackupMutations:
job=job_to_api_job(job),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def total_backup(self) -> GenericJobMutationReturn:
"""Back up all the enabled services at once
Useful when migrating
"""
try:
job = add_total_backup_job()
total_backup(job)
except Exception as error:
return api_job_mutation_error(error)
return GenericJobMutationReturn(
success=True,
code=200,
message="Total backup task queued",
job=job_to_api_job(job),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def restore_all(self) -> GenericJobMutationReturn:
"""
Restore all restorable and enabled services according to the last autobackup snapshots
This happens in sync with partial merging of the old configuration for compatibility
"""
try:
job = add_total_restore_job()
full_restore(job)
except Exception as error:
return GenericJobMutationReturn(
success=False,
code=400,
message=str(error),
job=None,
)
return GenericJobMutationReturn(
success=True,
code=200,
message="restore job created",
job=job_to_api_job(job),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def restore_backup(
self,
@ -173,7 +235,7 @@ class BackupMutations:
job=None,
)
service = get_service_by_id(snap.service_name)
service = ServiceManager.get_service_by_id(snap.service_name)
if service is None:
return GenericJobMutationReturn(
success=False,


@ -1,4 +1,5 @@
"""Manipulate jobs"""
# pylint: disable=too-few-public-methods
import strawberry


@ -1,12 +1,15 @@
"""Services mutations"""
# pylint: disable=too-few-public-methods
import typing
import strawberry
from selfprivacy_api.utils import pretty_error
from selfprivacy_api.graphql import IsAuthenticated
from selfprivacy_api.graphql.common_types.jobs import job_to_api_job
from selfprivacy_api.jobs import JobStatus
from traceback import format_tb as format_traceback
from selfprivacy_api.graphql.mutations.mutation_interface import (
GenericJobMutationReturn,
@ -23,7 +26,7 @@ from selfprivacy_api.actions.services import (
VolumeNotFoundError,
)
from selfprivacy_api.services import get_service_by_id
from selfprivacy_api.services import ServiceManager
@strawberry.type
@ -33,6 +36,51 @@ class ServiceMutationReturn(GenericMutationReturn):
service: typing.Optional[Service] = None
@strawberry.input
class SetServiceConfigurationInput:
"""Set service configuration input type.
The values might be of different types: str or bool.
"""
service_id: str
configuration: strawberry.scalars.JSON
"""Yes, it is a JSON scalar, which is supposed to be a Map<str, Union[str, int, bool]>.
I can't define it as a proper type because GraphQL doesn't support unions in input types.
There is a @oneOf directive, but it doesn't fit this use case.
Another option would have been doing something like this:
```python
@strawberry.type
class StringConfigurationInputField:
fieldId: str
value: str
@strawberry.type
class BoolConfigurationInputField:
fieldId: str
value: bool
# ...
@strawberry.input
class SetServiceConfigurationInput:
service_id: str
stringFields: List[StringConfigurationInputField]
boolFields: List[BoolConfigurationInputField]
enumFields: List[EnumConfigurationInputField]
intFields: List[IntConfigurationInputField]
```
But it would be very painful to maintain and would break compatibility with
every change.
Be careful when parsing it. It will probably be wise to add a parser/validator
later, when we get the new Pydantic integration in Strawberry.
-- Inex, 26.07.2024
"""
@strawberry.input
class MoveServiceInput:
"""Move service input type."""
@ -56,7 +104,7 @@ class ServicesMutations:
def enable_service(self, service_id: str) -> ServiceMutationReturn:
"""Enable service."""
try:
service = get_service_by_id(service_id)
service = ServiceManager.get_service_by_id(service_id)
if service is None:
return ServiceMutationReturn(
success=False,
@ -82,7 +130,7 @@ class ServicesMutations:
def disable_service(self, service_id: str) -> ServiceMutationReturn:
"""Disable service."""
try:
service = get_service_by_id(service_id)
service = ServiceManager.get_service_by_id(service_id)
if service is None:
return ServiceMutationReturn(
success=False,
@ -106,7 +154,7 @@ class ServicesMutations:
@strawberry.mutation(permission_classes=[IsAuthenticated])
def stop_service(self, service_id: str) -> ServiceMutationReturn:
"""Stop service."""
service = get_service_by_id(service_id)
service = ServiceManager.get_service_by_id(service_id)
if service is None:
return ServiceMutationReturn(
success=False,
@ -124,7 +172,7 @@ class ServicesMutations:
@strawberry.mutation(permission_classes=[IsAuthenticated])
def start_service(self, service_id: str) -> ServiceMutationReturn:
"""Start service."""
service = get_service_by_id(service_id)
service = ServiceManager.get_service_by_id(service_id)
if service is None:
return ServiceMutationReturn(
success=False,
@ -142,7 +190,7 @@ class ServicesMutations:
@strawberry.mutation(permission_classes=[IsAuthenticated])
def restart_service(self, service_id: str) -> ServiceMutationReturn:
"""Restart service."""
service = get_service_by_id(service_id)
service = ServiceManager.get_service_by_id(service_id)
if service is None:
return ServiceMutationReturn(
success=False,
@ -157,11 +205,46 @@ class ServicesMutations:
service=service_to_graphql_service(service),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def set_service_configuration(
self, input: SetServiceConfigurationInput
) -> ServiceMutationReturn:
"""Set the new configuration values"""
service = ServiceManager.get_service_by_id(input.service_id)
if service is None:
return ServiceMutationReturn(
success=False,
message=f"Service does not exist: {input.service_id}",
code=404,
)
try:
service.set_configuration(input.configuration)
return ServiceMutationReturn(
success=True,
message="Service configuration updated.",
code=200,
service=service_to_graphql_service(service),
)
except ValueError as e:
return ServiceMutationReturn(
success=False,
message=e.args[0],
code=400,
service=service_to_graphql_service(service),
)
except Exception as e:
return ServiceMutationReturn(
success=False,
message=pretty_error(e),
code=400,
service=service_to_graphql_service(service),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def move_service(self, input: MoveServiceInput) -> ServiceJobMutationReturn:
"""Move service."""
# We need a service instance for a reply later
service = get_service_by_id(input.service_id)
service = ServiceManager.get_service_by_id(input.service_id)
if service is None:
return ServiceJobMutationReturn(
success=False,
@ -210,8 +293,3 @@ class ServicesMutations:
service=service_to_graphql_service(service),
job=job_to_api_job(job),
)
def pretty_error(e: Exception) -> str:
traceback = "/r".join(format_traceback(e.__traceback__))
return type(e).__name__ + ": " + str(e) + ": " + traceback


@ -1,4 +1,5 @@
"""Storage devices mutations"""
import strawberry
from selfprivacy_api.graphql import IsAuthenticated
from selfprivacy_api.graphql.common_types.jobs import job_to_api_job

View file

@ -1,20 +1,25 @@
"""System management mutations"""
# pylint: disable=too-few-public-methods
import typing
import strawberry
from selfprivacy_api.utils import pretty_error
from selfprivacy_api.jobs.nix_collect_garbage import start_nix_collect_garbage
from selfprivacy_api.graphql import IsAuthenticated
from selfprivacy_api.graphql.common_types.jobs import job_to_api_job
from selfprivacy_api.graphql.queries.providers import DnsProvider
from selfprivacy_api.graphql.mutations.mutation_interface import (
GenericJobMutationReturn,
GenericMutationReturn,
MutationReturnInterface,
GenericJobMutationReturn,
)
import selfprivacy_api.actions.system as system_actions
from selfprivacy_api.graphql.common_types.jobs import job_to_api_job
from selfprivacy_api.jobs.nix_collect_garbage import start_nix_collect_garbage
import selfprivacy_api.actions.ssh as ssh_actions
from selfprivacy_api.actions.system import set_dns_provider
@strawberry.type
@ -48,6 +53,14 @@ class SSHSettingsInput:
password_authentication: bool
@strawberry.input
class SetDnsProviderInput:
"""Input type to set the provider"""
provider: DnsProvider
api_token: str
@strawberry.input
class AutoUpgradeSettingsInput:
"""Input type for auto upgrade settings"""
@ -209,3 +222,20 @@ class SystemMutations:
message="Garbage collector started...",
job=job_to_api_job(job),
)
@strawberry.mutation(permission_classes=[IsAuthenticated])
def set_dns_provider(self, input: SetDnsProviderInput) -> GenericMutationReturn:
try:
set_dns_provider(input.provider, input.api_token)
return GenericMutationReturn(
success=True,
code=200,
message="Provider set",
)
except Exception as e:
return GenericMutationReturn(
success=False,
code=400,
message=pretty_error(e),
)


@ -1,8 +1,10 @@
"""API access status"""
# pylint: disable=too-few-public-methods
import datetime
import typing
import strawberry
from strawberry.types import Info
from selfprivacy_api.actions.api_tokens import (
get_api_tokens_with_caller_flag,


@ -1,4 +1,5 @@
"""Backup"""
# pylint: disable=too-few-public-methods
import typing
import strawberry
@ -6,15 +7,18 @@ import strawberry
from selfprivacy_api.backup import Backups
from selfprivacy_api.backup.local_secret import LocalBackupSecret
from selfprivacy_api.backup.tasks import which_snapshots_to_full_restore
from selfprivacy_api.graphql.queries.providers import BackupProvider
from selfprivacy_api.graphql.common_types.service import (
Service,
ServiceStatusEnum,
SnapshotInfo,
SupportLevelEnum,
service_to_graphql_service,
)
from selfprivacy_api.graphql.common_types.backup import AutobackupQuotas
from selfprivacy_api.services import get_service_by_id
from selfprivacy_api.services import ServiceManager
from selfprivacy_api.models.backup.snapshot import Snapshot
@strawberry.type
@ -49,6 +53,32 @@ def tombstone_service(service_id: str) -> Service:
url=None,
can_be_backed_up=False,
backup_description="",
is_installed=False,
homepage=None,
source_page=None,
license=[],
is_system_service=False,
support_level=SupportLevelEnum.UNKNOWN,
)
def snapshot_to_api(snap: Snapshot):
api_service = None
service = ServiceManager.get_service_by_id(snap.service_name)
if service is None:
api_service = tombstone_service(snap.service_name)
else:
api_service = service_to_graphql_service(service)
if api_service is None:
raise NotImplementedError(
f"Could not construct API Service record for:{snap.service_name}. This should be unreachable and is a bug if you see it."
)
return SnapshotInfo(
id=snap.id,
service=api_service,
created_at=snap.created_at,
reason=snap.reason,
)
@ -70,26 +100,15 @@ class Backup:
def all_snapshots(self) -> typing.List[SnapshotInfo]:
if not Backups.is_initted():
return []
result = []
snapshots = Backups.get_all_snapshots()
for snap in snapshots:
api_service = None
service = get_service_by_id(snap.service_name)
return [snapshot_to_api(snap) for snap in snapshots]
if service is None:
api_service = tombstone_service(snap.service_name)
else:
api_service = service_to_graphql_service(service)
if api_service is None:
raise NotImplementedError(
f"Could not construct API Service record for:{snap.service_name}. This should be unreachable and is a bug if you see it."
)
@strawberry.field
def last_slice(self) -> typing.List[SnapshotInfo]:
"""
A query for seeing which snapshots will be restored when migrating
"""
graphql_snap = SnapshotInfo(
id=snap.id,
service=api_service,
created_at=snap.created_at,
reason=snap.reason,
)
result.append(graphql_snap)
return result
if not Backups.is_initted():
return []
return [snapshot_to_api(snap) for snap in which_snapshots_to_full_restore()]


@ -1,4 +1,5 @@
"""Common types and enums used by different types of queries."""
from enum import Enum
import datetime
import typing


@ -1,24 +1,30 @@
"""Jobs status"""
# pylint: disable=too-few-public-methods
import typing
import strawberry
from typing import List, Optional
from selfprivacy_api.jobs import Jobs
from selfprivacy_api.graphql.common_types.jobs import (
ApiJob,
get_api_job_by_id,
job_to_api_job,
)
from selfprivacy_api.jobs import Jobs
def get_all_jobs() -> List[ApiJob]:
jobs = Jobs.get_jobs()
api_jobs = [job_to_api_job(job) for job in jobs]
assert api_jobs is not None
return api_jobs
@strawberry.type
class Job:
@strawberry.field
def get_jobs(self) -> typing.List[ApiJob]:
Jobs.get_jobs()
return [job_to_api_job(job) for job in Jobs.get_jobs()]
def get_jobs(self) -> List[ApiJob]:
return get_all_jobs()
@strawberry.field
def get_job(self, job_id: str) -> typing.Optional[ApiJob]:
def get_job(self, job_id: str) -> Optional[ApiJob]:
return get_api_job_by_id(job_id)


@ -0,0 +1,99 @@
"""System logs"""
from datetime import datetime
import typing
import strawberry
from selfprivacy_api.utils.systemd_journal import get_paginated_logs
@strawberry.type
class LogEntry:
message: str = strawberry.field()
timestamp: datetime = strawberry.field()
priority: typing.Optional[int] = strawberry.field()
systemd_unit: typing.Optional[str] = strawberry.field()
systemd_slice: typing.Optional[str] = strawberry.field()
def __init__(self, journal_entry: typing.Dict):
self.entry = journal_entry
self.message = journal_entry["MESSAGE"]
self.timestamp = journal_entry["__REALTIME_TIMESTAMP"]
self.priority = journal_entry.get("PRIORITY")
self.systemd_unit = journal_entry.get("_SYSTEMD_UNIT")
self.systemd_slice = journal_entry.get("_SYSTEMD_SLICE")
@strawberry.field()
def cursor(self) -> str:
return self.entry["__CURSOR"]
@strawberry.type
class LogsPageMeta:
up_cursor: typing.Optional[str] = strawberry.field()
down_cursor: typing.Optional[str] = strawberry.field()
def __init__(
self, up_cursor: typing.Optional[str], down_cursor: typing.Optional[str]
):
self.up_cursor = up_cursor
self.down_cursor = down_cursor
@strawberry.type
class PaginatedEntries:
page_meta: LogsPageMeta = strawberry.field(
description="Metadata to aid in pagination."
)
entries: typing.List[LogEntry] = strawberry.field(
description="The list of log entries."
)
def __init__(self, meta: LogsPageMeta, entries: typing.List[LogEntry]):
self.page_meta = meta
self.entries = entries
@staticmethod
def from_entries(entries: typing.List[LogEntry]):
if entries == []:
return PaginatedEntries(LogsPageMeta(None, None), [])
return PaginatedEntries(
LogsPageMeta(
entries[0].cursor(),
entries[-1].cursor(),
),
entries,
)
@strawberry.type
class Logs:
@strawberry.field()
def paginated(
self,
limit: int = 20,
# All entries returned will be less than this cursor. Sets upper bound on results.
up_cursor: str | None = None,
# All entries returned will be greater than this cursor. Sets lower bound on results.
down_cursor: str | None = None,
# All entries will be from a specific systemd slice
filterBySlice: str | None = None,
# All entries will be from a specific systemd unit
filterByUnit: str | None = None,
) -> PaginatedEntries:
if limit > 50:
raise Exception("You can't fetch more than 50 entries via a single request.")
return PaginatedEntries.from_entries(
list(
map(
lambda x: LogEntry(x),
get_paginated_logs(
limit,
up_cursor,
down_cursor,
filterBySlice,
filterByUnit,
),
)
)
)
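A minimal sketch of paging backwards through history with the same helper this resolver wraps, assuming older pages are fetched by passing the previous page's up cursor as the new upper bound:
```python
from selfprivacy_api.utils.systemd_journal import get_paginated_logs

# Hypothetical: walk the journal backwards, 20 entries per page.
cursor = None
while True:
    entries = [LogEntry(e) for e in get_paginated_logs(20, cursor, None, None, None)]
    if not entries:
        break
    for entry in entries:
        print(entry.timestamp, entry.message)
    cursor = entries[0].cursor()  # upper bound (exclusive) for the next, older page
```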


@ -0,0 +1,127 @@
import strawberry
from typing import Optional
from datetime import datetime
from selfprivacy_api.models.services import ServiceStatus
from selfprivacy_api.services.prometheus import Prometheus
from selfprivacy_api.utils.monitoring import (
MonitoringQueries,
MonitoringQueryError,
MonitoringValuesResult,
MonitoringMetricsResult,
)
@strawberry.type
class CpuMonitoring:
start: Optional[datetime]
end: Optional[datetime]
step: int
@strawberry.field
def overall_usage(self) -> MonitoringValuesResult:
if Prometheus().get_status() != ServiceStatus.ACTIVE:
return MonitoringQueryError(error="Prometheus is not running")
return MonitoringQueries.cpu_usage_overall(self.start, self.end, self.step)
@strawberry.type
class MemoryMonitoring:
start: Optional[datetime]
end: Optional[datetime]
step: int
@strawberry.field
def overall_usage(self) -> MonitoringValuesResult:
if Prometheus().get_status() != ServiceStatus.ACTIVE:
return MonitoringQueryError(error="Prometheus is not running")
return MonitoringQueries.memory_usage_overall(self.start, self.end, self.step)
@strawberry.field
def swap_usage_overall(self) -> MonitoringValuesResult:
if Prometheus().get_status() != ServiceStatus.ACTIVE:
return MonitoringQueryError(error="Prometheus is not running")
return MonitoringQueries.swap_usage_overall(self.start, self.end, self.step)
@strawberry.field
def average_usage_by_service(self) -> MonitoringMetricsResult:
if Prometheus().get_status() != ServiceStatus.ACTIVE:
return MonitoringQueryError(error="Prometheus is not running")
return MonitoringQueries.memory_usage_average_by_slice(self.start, self.end)
@strawberry.field
def max_usage_by_service(self) -> MonitoringMetricsResult:
if Prometheus().get_status() != ServiceStatus.ACTIVE:
return MonitoringQueryError(error="Prometheus is not running")
return MonitoringQueries.memory_usage_max_by_slice(self.start, self.end)
@strawberry.type
class DiskMonitoring:
start: Optional[datetime]
end: Optional[datetime]
step: int
@strawberry.field
def overall_usage(self) -> MonitoringMetricsResult:
if Prometheus().get_status() != ServiceStatus.ACTIVE:
return MonitoringQueryError(error="Prometheus is not running")
return MonitoringQueries.disk_usage_overall(self.start, self.end, self.step)
@strawberry.type
class NetworkMonitoring:
start: Optional[datetime]
end: Optional[datetime]
step: int
@strawberry.field
def overall_usage(self) -> MonitoringMetricsResult:
if Prometheus().get_status() != ServiceStatus.ACTIVE:
return MonitoringQueryError(error="Prometheus is not running")
return MonitoringQueries.network_usage_overall(self.start, self.end, self.step)
@strawberry.type
class Monitoring:
@strawberry.field
def cpu_usage(
self,
start: Optional[datetime] = None,
end: Optional[datetime] = None,
step: int = 60,
) -> CpuMonitoring:
return CpuMonitoring(start=start, end=end, step=step)
@strawberry.field
def memory_usage(
self,
start: Optional[datetime] = None,
end: Optional[datetime] = None,
step: int = 60,
) -> MemoryMonitoring:
return MemoryMonitoring(start=start, end=end, step=step)
@strawberry.field
def disk_usage(
self,
start: Optional[datetime] = None,
end: Optional[datetime] = None,
step: int = 60,
) -> DiskMonitoring:
return DiskMonitoring(start=start, end=end, step=step)
@strawberry.field
def network_usage(
self,
start: Optional[datetime] = None,
end: Optional[datetime] = None,
step: int = 60,
) -> NetworkMonitoring:
return NetworkMonitoring(start=start, end=end, step=step)
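For example, the last hour of overall CPU usage at the default 60-second step boils down to a single query call; a sketch that bypasses the GraphQL layer and calls the helper the field wraps:
```python
from datetime import datetime, timedelta

# Hypothetical direct query; the resolver would additionally check that
# Prometheus is ACTIVE and return MonitoringQueryError otherwise.
end = datetime.now()
start = end - timedelta(hours=1)
result = MonitoringQueries.cpu_usage_overall(start, end, 60)
```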


@ -1,4 +1,5 @@
"""Enums representing different service providers."""
from enum import Enum
import strawberry
@ -14,6 +15,7 @@ class DnsProvider(Enum):
class ServerProvider(Enum):
HETZNER = "HETZNER"
DIGITALOCEAN = "DIGITALOCEAN"
OTHER = "OTHER"
@strawberry.enum


@ -1,18 +1,23 @@
"""Services status"""
# pylint: disable=too-few-public-methods
import typing
import strawberry
from selfprivacy_api.graphql.common_types.service import (
Service,
service_to_graphql_service,
)
from selfprivacy_api.services import get_all_services
from selfprivacy_api.services import ServiceManager
@strawberry.type
class Services:
@strawberry.field
def all_services(self) -> typing.List[Service]:
services = get_all_services()
return [service_to_graphql_service(service) for service in services]
services = [
service_to_graphql_service(service)
for service in ServiceManager.get_all_services()
]
return sorted(services, key=lambda service: service.display_name)


@ -1,4 +1,5 @@
"""Storage queries."""
# pylint: disable=too-few-public-methods
import typing
import strawberry
@ -18,9 +19,11 @@ class Storage:
"""Get list of volumes"""
return [
StorageVolume(
total_space=str(volume.fssize)
if volume.fssize is not None
else str(volume.size),
total_space=(
str(volume.fssize)
if volume.fssize is not None
else str(volume.size)
),
free_space=str(volume.fsavail),
used_space=str(volume.fsused),
root=volume.is_root(),


@ -1,15 +1,17 @@
"""Common system information and settings"""
# pylint: disable=too-few-public-methods
import os
import typing
import strawberry
from selfprivacy_api.graphql.common_types.dns import DnsRecord
from selfprivacy_api.graphql.queries.common import Alert, Severity
from selfprivacy_api.graphql.queries.providers import DnsProvider, ServerProvider
from selfprivacy_api.jobs import Jobs
from selfprivacy_api.jobs.migrate_to_binds import is_bind_migrated
from selfprivacy_api.services import get_all_required_dns_records
from selfprivacy_api.services import ServiceManager
from selfprivacy_api.utils import ReadUserData
import selfprivacy_api.actions.system as system_actions
import selfprivacy_api.actions.ssh as ssh_actions
@ -35,7 +37,7 @@ class SystemDomainInfo:
priority=record.priority,
display_name=record.display_name,
)
for record in get_all_required_dns_records()
for record in ServiceManager.get_all_required_dns_records()
]
@ -156,8 +158,8 @@ class System:
)
)
domain_info: SystemDomainInfo = strawberry.field(resolver=get_system_domain_info)
settings: SystemSettings = SystemSettings()
info: SystemInfo = SystemInfo()
settings: SystemSettings = strawberry.field(default_factory=SystemSettings)
info: SystemInfo = strawberry.field(default_factory=SystemInfo)
provider: SystemProviderInfo = strawberry.field(resolver=get_system_provider_info)
@strawberry.field


@ -1,4 +1,5 @@
"""Users"""
# pylint: disable=too-few-public-methods
import typing
import strawberry


@ -1,9 +1,12 @@
"""GraphQL API for SelfPrivacy."""
# pylint: disable=too-few-public-methods
import asyncio
from typing import AsyncGenerator
from typing import AsyncGenerator, List
import strawberry
from strawberry.types import Info
from selfprivacy_api.graphql import IsAuthenticated
from selfprivacy_api.graphql.mutations.deprecated_mutations import (
DeprecatedApiMutations,
@ -24,9 +27,23 @@ from selfprivacy_api.graphql.mutations.backup_mutations import BackupMutations
from selfprivacy_api.graphql.queries.api_queries import Api
from selfprivacy_api.graphql.queries.backup import Backup
from selfprivacy_api.graphql.queries.jobs import Job
from selfprivacy_api.graphql.queries.logs import LogEntry, Logs
from selfprivacy_api.graphql.queries.services import Services
from selfprivacy_api.graphql.queries.storage import Storage
from selfprivacy_api.graphql.queries.system import System
from selfprivacy_api.graphql.queries.monitoring import Monitoring
from selfprivacy_api.graphql.subscriptions.jobs import ApiJob
from selfprivacy_api.graphql.subscriptions.jobs import (
job_updates as job_update_generator,
)
from selfprivacy_api.graphql.subscriptions.logs import log_stream
from selfprivacy_api.graphql.common_types.service import (
StringConfigItem,
BoolConfigItem,
EnumConfigItem,
)
from selfprivacy_api.graphql.mutations.users_mutations import UsersMutations
from selfprivacy_api.graphql.queries.users import Users
@ -47,6 +64,11 @@ class Query:
"""System queries"""
return System()
@strawberry.field(permission_classes=[IsAuthenticated])
def logs(self) -> Logs:
"""Log queries"""
return Logs()
@strawberry.field(permission_classes=[IsAuthenticated])
def users(self) -> Users:
"""Users queries"""
@ -72,6 +94,11 @@ class Query:
"""Backup queries"""
return Backup()
@strawberry.field(permission_classes=[IsAuthenticated])
def monitoring(self) -> Monitoring:
"""Monitoring queries"""
return Monitoring()
@strawberry.type
class Mutation(
@ -129,22 +156,50 @@ class Mutation(
code=200,
)
pass
# Cruft for WebSockets
def authenticated(info: Info) -> bool:
return IsAuthenticated().has_permission(source=None, info=info)
def reject_if_unauthenticated(info: Info):
if not authenticated(info):
raise Exception(IsAuthenticated().message)
@strawberry.type
class Subscription:
"""Root schema for subscriptions"""
"""Root schema for subscriptions.
Every field here should be an AsyncIterator or AsyncGenerator.
It is not a part of the spec, but graphql-core (a dependency of strawberry)
demands it while the spec is vague in this area."""
@strawberry.subscription(permission_classes=[IsAuthenticated])
async def count(self, target: int = 100) -> AsyncGenerator[int, None]:
for i in range(target):
@strawberry.subscription
async def job_updates(self, info: Info) -> AsyncGenerator[List[ApiJob], None]:
reject_if_unauthenticated(info)
return job_update_generator()
@strawberry.subscription
# Used for testing; consider deleting it to shrink the attack surface
async def count(self, info: Info) -> AsyncGenerator[int, None]:
reject_if_unauthenticated(info)
for i in range(10):
yield i
await asyncio.sleep(0.5)
@strawberry.subscription
async def log_entries(self, info: Info) -> AsyncGenerator[LogEntry, None]:
reject_if_unauthenticated(info)
return log_stream()
schema = strawberry.Schema(
query=Query,
mutation=Mutation,
subscription=Subscription,
types=[
StringConfigItem,
BoolConfigItem,
EnumConfigItem,
],
)


@ -0,0 +1,14 @@
# pylint: disable=too-few-public-methods
from typing import AsyncGenerator, List
from selfprivacy_api.jobs import job_notifications
from selfprivacy_api.graphql.common_types.jobs import ApiJob
from selfprivacy_api.graphql.queries.jobs import get_all_jobs
async def job_updates() -> AsyncGenerator[List[ApiJob], None]:
# Send the complete list of jobs every time anything gets updated
async for notification in job_notifications():
yield get_all_jobs()
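A hedged sketch of consuming this generator outside of GraphQL, for instance while debugging; it assumes a reachable Redis, since job_notifications is backed by a pub/sub channel:
```python
import asyncio

# Hypothetical consumer: print a summary whenever any job changes.
async def watch_jobs():
    async for jobs in job_updates():
        print(f"{len(jobs)} jobs known")

# asyncio.run(watch_jobs())
```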


@ -0,0 +1,37 @@
from typing import AsyncGenerator
from systemd import journal
import asyncio
from selfprivacy_api.graphql.queries.logs import LogEntry
async def log_stream() -> AsyncGenerator[LogEntry, None]:
j = journal.Reader()
j.seek_tail()
j.get_previous()
queue = asyncio.Queue()
async def callback():
if j.process() != journal.APPEND:
return
for entry in j:
await queue.put(entry)
asyncio.get_event_loop().add_reader(j, lambda: asyncio.ensure_future(callback()))
try:
while True:
entry = await queue.get()
try:
yield LogEntry(entry)
except Exception:
asyncio.get_event_loop().remove_reader(j)
j.close()
return
queue.task_done()
except asyncio.CancelledError:
asyncio.get_event_loop().remove_reader(j)
j.close()
return
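The same consumption pattern applies here; a minimal sketch, assuming a readable systemd journal:
```python
import asyncio

# Hypothetical consumer: tail the journal through the generator above.
async def tail_logs():
    async for entry in log_stream():
        print(entry.timestamp, entry.message)

# asyncio.run(tail_logs())
```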


@ -14,7 +14,9 @@ A job is a dictionary with the following keys:
- error: error message if the job failed
- result: result of the job
"""
import typing
import asyncio
import datetime
from uuid import UUID
import uuid
@ -23,6 +25,7 @@ from enum import Enum
from pydantic import BaseModel
from selfprivacy_api.utils.redis_pool import RedisPool
from selfprivacy_api.utils.redis_model_storage import store_model_as_hash
JOB_EXPIRATION_SECONDS = 10 * 24 * 60 * 60 # ten days
@ -102,7 +105,7 @@ class Jobs:
result=None,
)
redis = RedisPool().get_connection()
_store_job_as_hash(redis, _redis_key_from_uuid(job.uid), job)
store_model_as_hash(redis, _redis_key_from_uuid(job.uid), job)
return job
@staticmethod
@ -218,7 +221,7 @@ class Jobs:
redis = RedisPool().get_connection()
key = _redis_key_from_uuid(job.uid)
if redis.exists(key):
_store_job_as_hash(redis, key, job)
store_model_as_hash(redis, key, job)
if status in (JobStatus.FINISHED, JobStatus.ERROR):
redis.expire(key, JOB_EXPIRATION_SECONDS)
@ -294,17 +297,6 @@ def _progress_log_key_from_uuid(uuid_string) -> str:
return PROGRESS_LOGS_PREFIX + str(uuid_string)
def _store_job_as_hash(redis, redis_key, model) -> None:
for key, value in model.dict().items():
if isinstance(value, uuid.UUID):
value = str(value)
if isinstance(value, datetime.datetime):
value = value.isoformat()
if isinstance(value, JobStatus):
value = value.value
redis.hset(redis_key, key, str(value))
def _job_from_hash(redis, redis_key) -> typing.Optional[Job]:
if redis.exists(redis_key):
job_dict = redis.hgetall(redis_key)
@ -321,3 +313,15 @@ def _job_from_hash(redis, redis_key) -> typing.Optional[Job]:
return Job(**job_dict)
return None
async def job_notifications() -> typing.AsyncGenerator[dict, None]:
channel = await RedisPool().subscribe_to_keys("jobs:*")
while True:
try:
# we cannot timeout here because we do not know when the next message is supposed to arrive
message: dict = await channel.get_message(ignore_subscribe_messages=True, timeout=None) # type: ignore
if message is not None:
yield message
except GeneratorExit:
break


@ -1,19 +1,20 @@
"""Function to perform migration of app data to binds."""
import subprocess
import pathlib
import shutil
import logging
from pydantic import BaseModel
from selfprivacy_api.jobs import Job, JobStatus, Jobs
from selfprivacy_api.services.bitwarden import Bitwarden
from selfprivacy_api.services.gitea import Gitea
from selfprivacy_api.services import ServiceManager
from selfprivacy_api.services.mailserver import MailServer
from selfprivacy_api.services.nextcloud import Nextcloud
from selfprivacy_api.services.pleroma import Pleroma
from selfprivacy_api.utils import ReadUserData, WriteUserData
from selfprivacy_api.utils.huey import huey
from selfprivacy_api.utils.block_devices import BlockDevices
logger = logging.getLogger(__name__)
class BindMigrationConfig(BaseModel):
"""Config for bind migration.
@ -68,7 +69,7 @@ def move_folder(
try:
data_path.mkdir(mode=0o750, parents=True, exist_ok=True)
except Exception as error:
print(f"Error creating data path: {error}")
logging.error(f"Error creating data path: {error}")
return
try:
@ -80,12 +81,12 @@ def move_folder(
try:
subprocess.run(["mount", "--bind", str(bind_path), str(data_path)], check=True)
except subprocess.CalledProcessError as error:
print(error)
logging.error(error)
try:
subprocess.run(["chown", "-R", f"{user}:{group}", str(data_path)], check=True)
except subprocess.CalledProcessError as error:
print(error)
logging.error(error)
@huey.task()
@ -101,6 +102,50 @@ def migrate_to_binds(config: BindMigrationConfig, job: Job):
)
return
Jobs.update(
job=job,
status=JobStatus.RUNNING,
progress=0,
status_text="Checking if services are present.",
)
nextcloud_service = ServiceManager.get_service_by_id("nextcloud")
bitwarden_service = ServiceManager.get_service_by_id("bitwarden")
gitea_service = ServiceManager.get_service_by_id("gitea")
pleroma_service = ServiceManager.get_service_by_id("pleroma")
if not nextcloud_service:
Jobs.update(
job=job,
status=JobStatus.ERROR,
error="Nextcloud service not found.",
)
return
if not bitwarden_service:
Jobs.update(
job=job,
status=JobStatus.ERROR,
error="Bitwarden service not found.",
)
return
if not gitea_service:
Jobs.update(
job=job,
status=JobStatus.ERROR,
error="Gitea service not found.",
)
return
if not pleroma_service:
Jobs.update(
job=job,
status=JobStatus.ERROR,
error="Pleroma service not found.",
)
return
Jobs.update(
job=job,
status=JobStatus.RUNNING,
@ -168,7 +213,7 @@ def migrate_to_binds(config: BindMigrationConfig, job: Job):
status_text="Migrating Nextcloud.",
)
Nextcloud().stop()
nextcloud_service.stop()
# If /volumes/sda1/nextcloud or /volumes/sdb/nextcloud exists, skip it.
if not pathlib.Path("/volumes/sda1/nextcloud").exists():
@ -183,7 +228,7 @@ def migrate_to_binds(config: BindMigrationConfig, job: Job):
)
# Start Nextcloud
Nextcloud().start()
nextcloud_service.start()
# Perform migration of Bitwarden
@ -194,7 +239,7 @@ def migrate_to_binds(config: BindMigrationConfig, job: Job):
status_text="Migrating Bitwarden.",
)
Bitwarden().stop()
bitwarden_service.stop()
if not pathlib.Path("/volumes/sda1/bitwarden").exists():
if not pathlib.Path("/volumes/sdb/bitwarden").exists():
@ -219,7 +264,7 @@ def migrate_to_binds(config: BindMigrationConfig, job: Job):
)
# Start Bitwarden
Bitwarden().start()
bitwarden_service.start()
# Perform migration of Gitea
@ -230,7 +275,7 @@ def migrate_to_binds(config: BindMigrationConfig, job: Job):
status_text="Migrating Gitea.",
)
Gitea().stop()
gitea_service.stop()
if not pathlib.Path("/volumes/sda1/gitea").exists():
if not pathlib.Path("/volumes/sdb/gitea").exists():
@ -241,7 +286,7 @@ def migrate_to_binds(config: BindMigrationConfig, job: Job):
group="gitea",
)
Gitea().start()
gitea_service.start()
# Perform migration of Mail server
@ -283,7 +328,7 @@ def migrate_to_binds(config: BindMigrationConfig, job: Job):
status_text="Migrating Pleroma.",
)
Pleroma().stop()
pleroma_service.stop()
if not pathlib.Path("/volumes/sda1/pleroma").exists():
if not pathlib.Path("/volumes/sdb/pleroma").exists():
@ -307,7 +352,7 @@ def migrate_to_binds(config: BindMigrationConfig, job: Job):
group="postgres",
)
Pleroma().start()
pleroma_service.start()
Jobs.update(
job=job,


@ -21,7 +21,13 @@ CLEAR_COMPLETED = "Garbage collection completed."
def delete_old_gens_and_return_dead_report() -> str:
subprocess.run(
["nix-env", "-p", "/nix/var/nix/profiles/system", "--delete-generations old"],
[
"nix-env",
"-p",
"/nix/var/nix/profiles/system",
"--delete-generations",
"old",
],
check=False,
)


@ -3,6 +3,7 @@ A task to start the system upgrade or rebuild by starting a systemd unit.
After starting, track the status of the systemd unit and update the Job
status accordingly.
"""
import subprocess
from selfprivacy_api.utils.huey import huey
from selfprivacy_api.jobs import JobStatus, Jobs, Job


@ -9,15 +9,23 @@ with IDs of the migrations to skip.
Adding DISABLE_ALL to that array disables the migrations module entirely.
"""
import logging
from selfprivacy_api.utils import ReadUserData, UserDataFiles
from selfprivacy_api.migrations.write_token_to_redis import WriteTokenToRedis
from selfprivacy_api.migrations.check_for_system_rebuild_jobs import (
CheckForSystemRebuildJobs,
)
from selfprivacy_api.migrations.add_roundcube import AddRoundcube
from selfprivacy_api.migrations.add_monitoring import AddMonitoring
logger = logging.getLogger(__name__)
migrations = [
WriteTokenToRedis(),
CheckForSystemRebuildJobs(),
AddMonitoring(),
AddRoundcube(),
]
@ -43,6 +51,6 @@ def run_migrations():
if migration.is_migration_needed():
migration.migrate()
except Exception as err:
print(f"Error while migrating {migration.get_migration_name()}")
print(err)
print("Skipping this migration")
logging.error(f"Error while migrating {migration.get_migration_name()}")
logging.error(err)
logging.error("Skipping this migration")


@ -0,0 +1,37 @@
from selfprivacy_api.migrations.migration import Migration
from selfprivacy_api.services.flake_service_manager import FlakeServiceManager
from selfprivacy_api.utils import ReadUserData, WriteUserData
from selfprivacy_api.utils.block_devices import BlockDevices
class AddMonitoring(Migration):
"""Adds monitoring service if it is not present."""
def get_migration_name(self) -> str:
return "add_monitoring"
def get_migration_description(self) -> str:
return "Adds the Monitoring if it is not present."
def is_migration_needed(self) -> bool:
with FlakeServiceManager() as manager:
if "monitoring" not in manager.services:
return True
with ReadUserData() as data:
if "monitoring" not in data["modules"]:
return True
return False
def migrate(self) -> None:
with FlakeServiceManager() as manager:
if "monitoring" not in manager.services:
manager.services["monitoring"] = (
"git+https://git.selfprivacy.org/SelfPrivacy/selfprivacy-nixos-config.git?ref=flakes&dir=sp-modules/monitoring"
)
with WriteUserData() as data:
if "monitoring" not in data["modules"]:
data["modules"]["monitoring"] = {
"enable": True,
"location": BlockDevices().get_root_block_device().name,
}


@ -0,0 +1,27 @@
from selfprivacy_api.migrations.migration import Migration
from selfprivacy_api.services.flake_service_manager import FlakeServiceManager
from selfprivacy_api.utils import ReadUserData, WriteUserData
class AddRoundcube(Migration):
"""Adds the Roundcube if it is not present."""
def get_migration_name(self) -> str:
return "add_roundcube"
def get_migration_description(self) -> str:
return "Adds the Roundcube if it is not present."
def is_migration_needed(self) -> bool:
with FlakeServiceManager() as manager:
if "roundcube" not in manager.services:
return True
return False
def migrate(self) -> None:
with FlakeServiceManager() as manager:
if "roundcube" not in manager.services:
manager.services["roundcube"] = (
"git+https://git.selfprivacy.org/SelfPrivacy/selfprivacy-nixos-config.git?ref=flakes&dir=sp-modules/roundcube"
)


@ -5,13 +5,13 @@ from selfprivacy_api.jobs import JobStatus, Jobs
class CheckForSystemRebuildJobs(Migration):
"""Check if there are unfinished system rebuild jobs and finish them"""
def get_migration_name(self):
def get_migration_name(self) -> str:
return "check_for_system_rebuild_jobs"
def get_migration_description(self):
def get_migration_description(self) -> str:
return "Check if there are unfinished system rebuild jobs and finish them"
def is_migration_needed(self):
def is_migration_needed(self) -> bool:
# Check if there are any unfinished system rebuild jobs
for job in Jobs.get_jobs():
if (
@ -25,8 +25,9 @@ class CheckForSystemRebuildJobs(Migration):
JobStatus.RUNNING,
]:
return True
return False
def migrate(self):
def migrate(self) -> None:
# As the API is restarted, we assume that the jobs are finished
for job in Jobs.get_jobs():
if (


@ -12,17 +12,17 @@ class Migration(ABC):
"""
@abstractmethod
def get_migration_name(self):
def get_migration_name(self) -> str:
pass
@abstractmethod
def get_migration_description(self):
def get_migration_description(self) -> str:
pass
@abstractmethod
def is_migration_needed(self):
def is_migration_needed(self) -> bool:
pass
@abstractmethod
def migrate(self):
def migrate(self) -> None:
pass


@ -1,3 +1,4 @@
import logging
from datetime import datetime
from typing import Optional
from selfprivacy_api.migrations.migration import Migration
@ -11,14 +12,16 @@ from selfprivacy_api.repositories.tokens.abstract_tokens_repository import (
)
from selfprivacy_api.utils import ReadUserData, UserDataFiles
logger = logging.getLogger(__name__)
class WriteTokenToRedis(Migration):
"""Load Json tokens into Redis"""
def get_migration_name(self):
def get_migration_name(self) -> str:
return "write_token_to_redis"
def get_migration_description(self):
def get_migration_description(self) -> str:
return "Loads the initial token into redis token storage"
def is_repo_empty(self, repo: AbstractTokensRepository) -> bool:
@ -35,29 +38,30 @@ class WriteTokenToRedis(Migration):
created_at=datetime.now(),
)
except Exception as e:
print(e)
logging.error(e)
return None
def is_migration_needed(self):
def is_migration_needed(self) -> bool:
try:
if self.get_token_from_json() is not None and self.is_repo_empty(
RedisTokensRepository()
):
return True
except Exception as e:
print(e)
logging.error(e)
return False
return False
def migrate(self):
def migrate(self) -> None:
# Write info about providers to userdata.json
try:
token = self.get_token_from_json()
if token is None:
print("No token found in secrets.json")
logging.error("No token found in secrets.json")
return
RedisTokensRepository()._store_token(token)
print("Done")
logging.error("Done")
except Exception as e:
print(e)
print("Error migrating access tokens from json to redis")
logging.error(e)
logging.error("Error migrating access tokens from json to redis")


@ -1,6 +1,16 @@
from enum import Enum
from typing import Optional
from pydantic import BaseModel
from typing import Optional, List
from pydantic import BaseModel, ConfigDict
from pydantic.alias_generators import to_camel
from selfprivacy_api.services.owned_path import OwnedPath
class BaseSchema(BaseModel):
model_config = ConfigDict(
alias_generator=to_camel,
populate_by_name=True,
from_attributes=True,
)
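With this config, models accept both snake_case field names and camelCase aliases; a quick sketch of what the alias generator buys:
```python
# Hypothetical model: thanks to alias_generator=to_camel and
# populate_by_name=True, both spellings populate the same field.
class Example(BaseSchema):
    full_name: str

Example(fullName="GNU GPL v3")   # by camelCase alias
Example(full_name="GNU GPL v3")  # by Python field name
```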
class ServiceStatus(Enum):
@ -15,10 +25,71 @@ class ServiceStatus(Enum):
OFF = "OFF"
class SupportLevel(Enum):
"""Enum representing the support level of a service."""
NORMAL = "normal"
EXPERIMENTAL = "experimental"
DEPRECATED = "deprecated"
COMMUNITY = "community"
UNKNOWN = "unknown"
@classmethod
def from_str(cls, support_level: str) -> "SupportLevel":
"""Return the SupportLevel from a string."""
if support_level == "normal":
return cls.NORMAL
if support_level == "experimental":
return cls.EXPERIMENTAL
if support_level == "deprecated":
return cls.DEPRECATED
if support_level == "community":
return cls.COMMUNITY
return cls.UNKNOWN
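Since the enum values are exactly these strings, an equivalent, more compact form could lean on the Enum constructor; a sketch, behavior-preserving as long as unknown strings must fall back to UNKNOWN:
```python
@classmethod
def from_str(cls, support_level: str) -> "SupportLevel":
    """Hypothetical equivalent of the chain of ifs above."""
    try:
        return cls(support_level)
    except ValueError:
        return cls.UNKNOWN
```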
class ServiceDnsRecord(BaseModel):
type: str
name: str
content: str
ttl: int
display_name: str
priority: Optional[int] = None
class License(BaseSchema):
"""Model representing a license."""
deprecated: bool
free: bool
full_name: str
redistributable: bool
short_name: str
spdx_id: str
url: str
class ServiceMetaData(BaseSchema):
"""Model representing the meta data of a service."""
id: str
name: str
description: str = "No description found!"
svg_icon: str = ""
showUrl: bool = True
primary_subdomain: Optional[str] = None
is_movable: bool = False
is_required: bool = False
can_be_backed_up: bool = True
backup_description: str = "No backup description found!"
systemd_services: List[str]
user: Optional[str] = None
group: Optional[str] = None
folders: List[str] = []
owned_folders: List[OwnedPath] = []
postgre_databases: List[str] = []
license: List[License] = []
homepage: Optional[str] = None
source_page: Optional[str] = None
support_level: SupportLevel = SupportLevel.UNKNOWN


@ -1,6 +1,7 @@
"""
New device key used to obtain access token.
"""
from datetime import datetime, timedelta, timezone
import secrets
from pydantic import BaseModel


@ -3,6 +3,7 @@ Recovery key used to obtain access token.
Recovery key has a token string, date of creation, optional date of expiration and optional count of uses left.
"""
from datetime import datetime, timezone
import secrets
from typing import Optional


@ -3,6 +3,7 @@ Model of the access token.
Access token has a token string, device name and date of creation.
"""
from datetime import datetime
import secrets
from pydantic import BaseModel


@ -1,6 +1,7 @@
"""
Token repository using Redis as backend.
"""
from typing import Any, Optional
from datetime import datetime
from hashlib import md5


@ -1,73 +1,343 @@
"""Services module."""
import logging
import base64
import typing
from selfprivacy_api.services.bitwarden import Bitwarden
from selfprivacy_api.services.gitea import Gitea
from selfprivacy_api.services.jitsimeet import JitsiMeet
import subprocess
import json
from typing import List
from os import path
from os import makedirs
from os import listdir
from os.path import join
from functools import lru_cache
from shutil import copyfile, copytree, rmtree
from selfprivacy_api.jobs import Job, JobStatus, Jobs
from selfprivacy_api.services.prometheus import Prometheus
from selfprivacy_api.services.mailserver import MailServer
from selfprivacy_api.services.nextcloud import Nextcloud
from selfprivacy_api.services.pleroma import Pleroma
from selfprivacy_api.services.ocserv import Ocserv
from selfprivacy_api.services.service import Service, ServiceDnsRecord
from selfprivacy_api.services.service import ServiceStatus
from selfprivacy_api.utils.cached_call import get_ttl_hash
import selfprivacy_api.utils.network as network_utils
services: list[Service] = [
Bitwarden(),
Gitea(),
MailServer(),
Nextcloud(),
Pleroma(),
Ocserv(),
JitsiMeet(),
]
from selfprivacy_api.services.api_icon import API_ICON
from selfprivacy_api.utils import USERDATA_FILE, DKIM_DIR, SECRETS_FILE
from selfprivacy_api.utils.block_devices import BlockDevices
from selfprivacy_api.utils import read_account_uri
from selfprivacy_api.services.templated_service import (
SP_MODULES_DEFENITIONS_PATH,
SP_SUGGESTED_MODULES_PATH,
TemplatedService,
)
CONFIG_STASH_DIR = "/etc/selfprivacy/dump"
logger = logging.getLogger(__name__)
def get_all_services() -> list[Service]:
return services
class ServiceManager(Service):
folders: List[str] = [CONFIG_STASH_DIR]
@staticmethod
def get_all_services() -> list[Service]:
return get_services()
def get_service_by_id(service_id: str) -> typing.Optional[Service]:
for service in services:
if service.get_id() == service_id:
return service
return None
@staticmethod
def get_service_by_id(service_id: str) -> typing.Optional[Service]:
for service in get_services():
if service.get_id() == service_id:
return service
return None
@staticmethod
def get_enabled_services() -> list[Service]:
return [service for service in get_services() if service.is_enabled()]
def get_enabled_services() -> list[Service]:
return [service for service in services if service.is_enabled()]
# This one is not currently used by any code.
@staticmethod
def get_disabled_services() -> list[Service]:
return [service for service in get_services() if not service.is_enabled()]
def get_disabled_services() -> list[Service]:
return [service for service in services if not service.is_enabled()]
def get_services_by_location(location: str) -> list[Service]:
return [service for service in services if service.get_drive() == location]
def get_all_required_dns_records() -> list[ServiceDnsRecord]:
ip4 = network_utils.get_ip4()
ip6 = network_utils.get_ip6()
dns_records: list[ServiceDnsRecord] = [
ServiceDnsRecord(
type="A",
name="api",
content=ip4,
ttl=3600,
display_name="SelfPrivacy API",
),
]
if ip6 is not None:
dns_records.append(
ServiceDnsRecord(
type="AAAA",
name="api",
content=ip6,
ttl=3600,
display_name="SelfPrivacy API (IPv6)",
@staticmethod
def get_services_by_location(location: str) -> list[Service]:
return [
service
for service in get_services(
exclude_remote=True,
)
if service.get_drive() == location
]
@staticmethod
def get_all_required_dns_records() -> list[ServiceDnsRecord]:
ip4 = network_utils.get_ip4()
ip6 = network_utils.get_ip6()
dns_records: list[ServiceDnsRecord] = []
# TODO: Reenable with 3.6.0 release when clients are ready.
# Do not forget about tests!
# try:
# dns_records.append(
# ServiceDnsRecord(
# type="CAA",
# name=get_domain(),
# content=f'128 issue "letsencrypt.org;accounturi={read_account_uri()}"',
# ttl=3600,
# display_name="CAA record",
# )
# )
# except Exception as e:
# logging.error(f"Error creating CAA: {e}")
for service in ServiceManager.get_enabled_services():
dns_records += service.get_dns_records(ip4, ip6)
return dns_records
@staticmethod
def get_id() -> str:
"""Return service id."""
return "selfprivacy-api"
@staticmethod
def get_display_name() -> str:
"""Return service display name."""
return "Selfprivacy API"
@staticmethod
def get_description() -> str:
"""Return service description."""
return "Enables communication between the SelfPrivacy app and the server."
@staticmethod
def get_svg_icon() -> str:
"""Read SVG icon from file and return it as base64 encoded string."""
# return ""
return base64.b64encode(API_ICON.encode("utf-8")).decode("utf-8")
@staticmethod
def get_url() -> typing.Optional[str]:
"""Return service url."""
return None
@staticmethod
def get_subdomain() -> typing.Optional[str]:
return "api"
@staticmethod
def is_always_active() -> bool:
return True
@staticmethod
def is_movable() -> bool:
return False
@staticmethod
def is_required() -> bool:
return True
@staticmethod
def is_enabled() -> bool:
return True
@staticmethod
def is_system_service() -> bool:
return True
@staticmethod
def get_backup_description() -> str:
return "General server settings."
@classmethod
def get_status(cls) -> ServiceStatus:
return ServiceStatus.ACTIVE
@classmethod
def can_be_backed_up(cls) -> bool:
"""`True` if the service can be backed up."""
return True
@classmethod
def merge_settings(cls):
# For now we will just copy settings EXCEPT the locations of services
# Stash locations as they are set by user right now
locations = {}
for service in get_services(
exclude_remote=True,
):
if service.is_movable():
locations[service.get_id()] = service.get_drive()
# Copy files
for p in [USERDATA_FILE, SECRETS_FILE, DKIM_DIR]:
cls.retrieve_stashed_path(p)
# Pop location
for service in get_services(
exclude_remote=True,
):
if service.is_movable():
device = BlockDevices().get_block_device(locations[service.get_id()])
if device is not None:
service.set_location(device)
@classmethod
def stop(cls):
"""
We are always active
"""
raise ValueError("tried to stop an always active service")
@classmethod
def start(cls):
"""
We are always active
"""
pass
@classmethod
def restart(cls):
"""
We are always active
"""
pass
@classmethod
def get_drive(cls) -> str:
return BlockDevices().get_root_block_device().name
@classmethod
def get_folders(cls) -> List[str]:
return cls.folders
@classmethod
def stash_for(cls, p: str) -> str:
basename = path.basename(p)
stashed_file_location = join(cls.dump_dir(), basename)
return stashed_file_location
@classmethod
def stash_a_path(cls, p: str):
if path.isdir(p):
rmtree(cls.stash_for(p), ignore_errors=True)
copytree(p, cls.stash_for(p))
else:
copyfile(p, cls.stash_for(p))
@classmethod
def retrieve_stashed_path(cls, p: str):
"""
Takes an original path; hopefully it is stashed somewhere
"""
if path.isdir(p):
rmtree(p, ignore_errors=True)
copytree(cls.stash_for(p), p)
else:
copyfile(cls.stash_for(p), p)
@classmethod
def pre_backup(cls, job: Job):
Jobs.update(
job,
status_text="Stashing settings",
status=JobStatus.RUNNING,
)
for service in get_enabled_services():
dns_records += service.get_dns_records(ip4, ip6)
return dns_records
tempdir = cls.dump_dir()
rmtree(join(tempdir), ignore_errors=True)
makedirs(tempdir)
for p in [USERDATA_FILE, SECRETS_FILE, DKIM_DIR]:
cls.stash_a_path(p)
@classmethod
def post_backup(cls, job: Job):
rmtree(cls.dump_dir(), ignore_errors=True)
@classmethod
def dump_dir(cls) -> str:
"""
A directory we dump our settings into
"""
return cls.folders[0]
@classmethod
def post_restore(cls, job: Job):
cls.merge_settings()
rmtree(cls.dump_dir(), ignore_errors=True)
# @redis_cached_call(ttl=30)
@lru_cache()
def get_templated_service(service_id: str, ttl_hash=None) -> TemplatedService:
del ttl_hash
return TemplatedService(service_id)
# @redis_cached_call(ttl=3600)
@lru_cache()
def get_remote_service(id: str, url: str, ttl_hash=None) -> TemplatedService:
del ttl_hash
response = subprocess.run(
["sp-fetch-remote-module", url],
capture_output=True,
text=True,
check=True,
)
return TemplatedService(id, response.stdout)
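The ttl_hash parameter exists only to give lru_cache a key that changes over time: callers pass get_ttl_hash(seconds), and once the interval rolls over, the stale entry no longer matches. A common shape for such a helper (the real get_ttl_hash in selfprivacy_api.utils.cached_call may differ) is:
```python
import time

def get_ttl_hash(seconds: int = 3600) -> int:
    # Hypothetical sketch: constant within each `seconds`-long window,
    # so lru_cache entries effectively expire when the window rolls over.
    return round(time.time() / seconds)
```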
DUMMY_SERVICES = []
TEST_FLAGS: list[str] = []
def get_services(exclude_remote=False) -> List[Service]:
if "ONLY_DUMMY_SERVICE" in TEST_FLAGS:
return DUMMY_SERVICES
if "DUMMY_SERVICE_AND_API" in TEST_FLAGS:
return DUMMY_SERVICES + [ServiceManager()]
hardcoded_services: list[Service] = [
MailServer(),
ServiceManager(),
Prometheus(),
]
if DUMMY_SERVICES:
hardcoded_services += DUMMY_SERVICES
service_ids = [service.get_id() for service in hardcoded_services]
templated_services: List[Service] = []
if path.exists(SP_MODULES_DEFENITIONS_PATH):
for module in listdir(SP_MODULES_DEFENITIONS_PATH):
if module in service_ids:
continue
try:
templated_services.append(
get_templated_service(module, ttl_hash=get_ttl_hash(30))
)
service_ids.append(module)
except Exception as e:
logger.error(f"Failed to load service {module}: {e}")
if not exclude_remote and path.exists(SP_SUGGESTED_MODULES_PATH):
# It is a file with a JSON array
with open(SP_SUGGESTED_MODULES_PATH) as f:
suggested_modules = json.load(f)
for module in suggested_modules:
if module in service_ids:
continue
try:
templated_services.append(
get_remote_service(
module,
f"git+https://git.selfprivacy.org/SelfPrivacy/selfprivacy-nixos-config.git?ref=flakes&dir=sp-modules/{module}",
ttl_hash=get_ttl_hash(3600),
)
)
service_ids.append(module)
except Exception as e:
logger.error(f"Failed to load service {module}: {e}")
return hardcoded_services + templated_services
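A short usage sketch tying the discovery together; exclude_remote=True skips the suggested modules that would have to be fetched over the network:
```python
# Hypothetical: list every locally resolvable service and its state.
for service in get_services(exclude_remote=True):
    print(service.get_id(), "enabled" if service.is_enabled() else "disabled")
```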


@ -0,0 +1,5 @@
API_ICON = """
<svg width="33" height="33" viewBox="0 0 33 33" fill="none" xmlns="http://www.w3.org/2000/svg">
<path fill-rule="evenodd" clip-rule="evenodd" d="M0.98671 4.79425C0.98671 2.58511 2.77757 0.79425 4.98671 0.79425H28.9867C31.1958 0.79425 32.9867 2.58511 32.9867 4.79425V28.7943C32.9867 31.0034 31.1958 32.7943 28.9867 32.7943H4.98671C2.77757 32.7943 0.98671 31.0034 0.98671 28.7943V4.79425ZM26.9867 21.1483L24.734 18.8956V18.8198H24.6582L22.5047 16.6674V18.8198H11.358V23.2785H22.5047V25.6315L26.9867 21.1483ZM9.23944 10.1584H9.26842L11.4688 7.95697V10.1584H22.6154V14.6171H11.4688V16.9233L6.98671 12.439L9.23944 10.1863V10.1584Z" fill="black"/>
</svg>
"""


@ -1,101 +0,0 @@
"""Class representing Bitwarden service"""
import base64
import subprocess
from typing import Optional, List
from selfprivacy_api.utils import get_domain
from selfprivacy_api.utils.systemd import get_service_status
from selfprivacy_api.services.service import Service, ServiceStatus
from selfprivacy_api.services.bitwarden.icon import BITWARDEN_ICON
class Bitwarden(Service):
"""Class representing Bitwarden service."""
@staticmethod
def get_id() -> str:
"""Return service id."""
return "bitwarden"
@staticmethod
def get_display_name() -> str:
"""Return service display name."""
return "Bitwarden"
@staticmethod
def get_description() -> str:
"""Return service description."""
return "Bitwarden is a password manager."
@staticmethod
def get_svg_icon() -> str:
"""Read SVG icon from file and return it as base64 encoded string."""
return base64.b64encode(BITWARDEN_ICON.encode("utf-8")).decode("utf-8")
@staticmethod
def get_user() -> str:
return "vaultwarden"
@staticmethod
def get_url() -> Optional[str]:
"""Return service url."""
domain = get_domain()
return f"https://password.{domain}"
@staticmethod
def get_subdomain() -> Optional[str]:
return "password"
@staticmethod
def is_movable() -> bool:
return True
@staticmethod
def is_required() -> bool:
return False
@staticmethod
def get_backup_description() -> str:
return "Password database, encryption certificate and attachments."
@staticmethod
def get_status() -> ServiceStatus:
"""
Return Bitwarden status from systemd.
Use command return code to determine status.
Return code 0 means service is running.
Return code 1 or 2 means service is in error state.
Return code 3 means service is stopped.
Return code 4 means service is off.
"""
return get_service_status("vaultwarden.service")
@staticmethod
def stop():
subprocess.run(["systemctl", "stop", "vaultwarden.service"])
@staticmethod
def start():
subprocess.run(["systemctl", "start", "vaultwarden.service"])
@staticmethod
def restart():
subprocess.run(["systemctl", "restart", "vaultwarden.service"])
@staticmethod
def get_configuration():
return {}
@staticmethod
def set_configuration(config_items):
return super().set_configuration(config_items)
@staticmethod
def get_logs():
return ""
@staticmethod
def get_folders() -> List[str]:
return ["/var/lib/bitwarden", "/var/lib/bitwarden_rs"]


@@ -1,3 +0,0 @@
<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M5.125 2C4.2962 2 3.50134 2.32924 2.91529 2.91529C2.32924 3.50134 2 4.2962 2 5.125L2 18.875C2 19.7038 2.32924 20.4987 2.91529 21.0847C3.50134 21.6708 4.2962 22 5.125 22H18.875C19.7038 22 20.4987 21.6708 21.0847 21.0847C21.6708 20.4987 22 19.7038 22 18.875V5.125C22 4.2962 21.6708 3.50134 21.0847 2.91529C20.4987 2.32924 19.7038 2 18.875 2H5.125ZM6.25833 4.43333H17.7583C17.9317 4.43333 18.0817 4.49667 18.2083 4.62333C18.2688 4.68133 18.3168 4.7511 18.3494 4.82835C18.3819 4.9056 18.3983 4.98869 18.3975 5.0725V12.7392C18.3975 13.3117 18.2858 13.8783 18.0633 14.4408C17.8558 14.9751 17.5769 15.4789 17.2342 15.9383C16.8824 16.3987 16.4882 16.825 16.0567 17.2117C15.6008 17.6242 15.18 17.9667 14.7942 18.24C14.4075 18.5125 14.005 18.77 13.5858 19.0133C13.1667 19.2558 12.8692 19.4208 12.6925 19.5075C12.5158 19.5942 12.375 19.6608 12.2675 19.7075C12.1872 19.7472 12.0987 19.7674 12.0092 19.7667C11.919 19.7674 11.8299 19.7468 11.7492 19.7067C11.6062 19.6429 11.4645 19.5762 11.3242 19.5067C11.0218 19.3511 10.7242 19.1866 10.4317 19.0133C10.0175 18.7738 9.6143 18.5158 9.22333 18.24C8.7825 17.9225 8.36093 17.5791 7.96083 17.2117C7.52907 16.825 7.13456 16.3987 6.7825 15.9383C6.44006 15.4788 6.16141 14.9751 5.95417 14.4408C5.73555 13.9 5.62213 13.3225 5.62 12.7392V5.0725C5.62 4.89917 5.68333 4.75 5.80917 4.6225C5.86726 4.56188 5.93717 4.51382 6.01457 4.48129C6.09196 4.44875 6.17521 4.43243 6.25917 4.43333H6.25833ZM12.0083 6.35V17.7C12.8 17.2817 13.5092 16.825 14.135 16.3333C15.6992 15.1083 16.4808 13.9108 16.4808 12.7392V6.35H12.0083Z" fill="black"/>
</svg>


@@ -1,5 +0,0 @@
BITWARDEN_ICON = """
<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M5.125 2C4.2962 2 3.50134 2.32924 2.91529 2.91529C2.32924 3.50134 2 4.2962 2 5.125L2 18.875C2 19.7038 2.32924 20.4987 2.91529 21.0847C3.50134 21.6708 4.2962 22 5.125 22H18.875C19.7038 22 20.4987 21.6708 21.0847 21.0847C21.6708 20.4987 22 19.7038 22 18.875V5.125C22 4.2962 21.6708 3.50134 21.0847 2.91529C20.4987 2.32924 19.7038 2 18.875 2H5.125ZM6.25833 4.43333H17.7583C17.9317 4.43333 18.0817 4.49667 18.2083 4.62333C18.2688 4.68133 18.3168 4.7511 18.3494 4.82835C18.3819 4.9056 18.3983 4.98869 18.3975 5.0725V12.7392C18.3975 13.3117 18.2858 13.8783 18.0633 14.4408C17.8558 14.9751 17.5769 15.4789 17.2342 15.9383C16.8824 16.3987 16.4882 16.825 16.0567 17.2117C15.6008 17.6242 15.18 17.9667 14.7942 18.24C14.4075 18.5125 14.005 18.77 13.5858 19.0133C13.1667 19.2558 12.8692 19.4208 12.6925 19.5075C12.5158 19.5942 12.375 19.6608 12.2675 19.7075C12.1872 19.7472 12.0987 19.7674 12.0092 19.7667C11.919 19.7674 11.8299 19.7468 11.7492 19.7067C11.6062 19.6429 11.4645 19.5762 11.3242 19.5067C11.0218 19.3511 10.7242 19.1866 10.4317 19.0133C10.0175 18.7738 9.6143 18.5158 9.22333 18.24C8.7825 17.9225 8.36093 17.5791 7.96083 17.2117C7.52907 16.825 7.13456 16.3987 6.7825 15.9383C6.44006 15.4788 6.16141 14.9751 5.95417 14.4408C5.73555 13.9 5.62213 13.3225 5.62 12.7392V5.0725C5.62 4.89917 5.68333 4.75 5.80917 4.6225C5.86726 4.56188 5.93717 4.51382 6.01457 4.48129C6.09196 4.44875 6.17521 4.43243 6.25917 4.43333H6.25833ZM12.0083 6.35V17.7C12.8 17.2817 13.5092 16.825 14.135 16.3333C15.6992 15.1083 16.4808 13.9108 16.4808 12.7392V6.35H12.0083Z" fill="black"/>
</svg>
"""


@@ -0,0 +1,259 @@
from abc import ABC, abstractmethod
import re
from typing import Optional
from selfprivacy_api.utils import (
ReadUserData,
WriteUserData,
check_if_subdomain_is_taken,
)
class ServiceConfigItem(ABC):
id: str
description: str
widget: str
type: str
weight: int
@abstractmethod
def get_value(self, service_id):
pass
@abstractmethod
def set_value(self, value, service_id):
pass
@abstractmethod
def validate_value(self, value):
return True
def as_dict(self, service_id: str):
return {
"id": self.id,
"type": self.type,
"description": self.description,
"widget": self.widget,
"value": self.get_value(service_id),
"weight": self.weight,
}
class StringServiceConfigItem(ServiceConfigItem):
def __init__(
self,
id: str,
default_value: str,
description: str,
regex: Optional[str] = None,
widget: Optional[str] = None,
allow_empty: bool = False,
weight: int = 50,
):
if widget == "subdomain" and not regex:
raise ValueError("Subdomain widget requires regex")
self.id = id
self.type = "string"
self.default_value = default_value
self.description = description
self.regex = re.compile(regex) if regex else None
self.widget = widget if widget else "text"
self.allow_empty = allow_empty
self.weight = weight
def get_value(self, service_id):
with ReadUserData() as user_data:
if "modules" in user_data and service_id in user_data["modules"]:
return user_data["modules"][service_id].get(self.id, self.default_value)
return self.default_value
def set_value(self, value, service_id):
if not self.validate_value(value):
raise ValueError(f"Value {value} is not valid")
with WriteUserData() as user_data:
if "modules" not in user_data:
user_data["modules"] = {}
if service_id not in user_data["modules"]:
user_data["modules"][service_id] = {}
user_data["modules"][service_id][self.id] = value
def as_dict(self, service_id):
return {
"id": self.id,
"type": self.type,
"description": self.description,
"widget": self.widget,
"value": self.get_value(service_id),
"default_value": self.default_value,
"regex": self.regex.pattern if self.regex else None,
"weight": self.weight,
}
def validate_value(self, value):
if not isinstance(value, str):
return False
if not self.allow_empty and not value:
return False
if self.regex and not self.regex.match(value):
return False
if self.widget == "subdomain":
if check_if_subdomain_is_taken(value):
return False
return True
class BoolServiceConfigItem(ServiceConfigItem):
def __init__(
self,
id: str,
default_value: bool,
description: str,
widget: Optional[str] = None,
weight: int = 50,
):
self.id = id
self.type = "bool"
self.default_value = default_value
self.description = description
self.widget = widget if widget else "switch"
self.weight = weight
def get_value(self, service_id):
with ReadUserData() as user_data:
if "modules" in user_data and service_id in user_data["modules"]:
return user_data["modules"][service_id].get(self.id, self.default_value)
return self.default_value
def set_value(self, value, service_id):
if not self.validate_value(value):
raise ValueError(f"Value {value} is not a boolean")
with WriteUserData() as user_data:
if "modules" not in user_data:
user_data["modules"] = {}
if service_id not in user_data["modules"]:
user_data["modules"][service_id] = {}
user_data["modules"][service_id][self.id] = value
def as_dict(self, service_id):
return {
"id": self.id,
"type": self.type,
"description": self.description,
"widget": self.widget,
"value": self.get_value(service_id),
"default_value": self.default_value,
"weight": self.weight,
}
def validate_value(self, value):
return isinstance(value, bool)
class EnumServiceConfigItem(ServiceConfigItem):
def __init__(
self,
id: str,
default_value: str,
description: str,
options: list[str],
widget: Optional[str] = None,
weight: int = 50,
):
self.id = id
self.type = "enum"
self.default_value = default_value
self.description = description
self.options = options
self.widget = widget if widget else "select"
self.weight = weight
def get_value(self, service_id):
with ReadUserData() as user_data:
if "modules" in user_data and service_id in user_data["modules"]:
return user_data["modules"][service_id].get(self.id, self.default_value)
return self.default_value
def set_value(self, value, service_id):
if not self.validate_value(value):
raise ValueError(f"Value {value} is not in options")
with WriteUserData() as user_data:
if "modules" not in user_data:
user_data["modules"] = {}
if service_id not in user_data["modules"]:
user_data["modules"][service_id] = {}
user_data["modules"][service_id][self.id] = value
def as_dict(self, service_id):
return {
"id": self.id,
"type": self.type,
"description": self.description,
"widget": self.widget,
"value": self.get_value(service_id),
"default_value": self.default_value,
"options": self.options,
"weight": self.weight,
}
def validate_value(self, value):
if not isinstance(value, str):
return False
return value in self.options
# TODO: unused for now
class IntServiceConfigItem(ServiceConfigItem):
def __init__(
self,
id: str,
default_value: int,
description: str,
widget: Optional[str] = None,
min_value: Optional[int] = None,
max_value: Optional[int] = None,
weight: int = 50,
) -> None:
self.id = id
self.type = "int"
self.default_value = default_value
self.description = description
self.widget = widget if widget else "number"
self.min_value = min_value
self.max_value = max_value
self.weight = weight
def get_value(self, service_id):
with ReadUserData() as user_data:
if "modules" in user_data and service_id in user_data["modules"]:
return user_data["modules"][service_id].get(self.id, self.default_value)
return self.default_value
def set_value(self, value, service_id):
if not self.validate_value(value):
raise ValueError(f"Value {value} is not valid")
with WriteUserData() as user_data:
if "modules" not in user_data:
user_data["modules"] = {}
if service_id not in user_data["modules"]:
user_data["modules"][service_id] = {}
user_data["modules"][service_id][self.id] = value
def as_dict(self, service_id):
return {
"id": self.id,
"type": self.type,
"description": self.description,
"widget": self.widget,
"value": self.get_value(service_id),
"default_value": self.default_value,
"min_value": self.min_value,
"max_value": self.max_value,
"weight": self.weight,
}
def validate_value(self, value):
if not isinstance(value, int):
return False
return (self.min_value is None or value >= self.min_value) and (
self.max_value is None or value <= self.max_value
)
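
A minimal usage sketch of the config items above, assuming a writable userdata file (get_value and set_value go through ReadUserData/WriteUserData); the ids and values are hypothetical.

from selfprivacy_api.services.config_item import (
    BoolServiceConfigItem,
    StringServiceConfigItem,
)

# Hypothetical items, declared the way a Service subclass would fill
# its config_items dict.
admin_name = StringServiceConfigItem(
    id="adminName",
    default_value="admin",
    description="Administrator user name",
    regex=r"^[a-z_][a-z0-9_-]+$",
)
verbose = BoolServiceConfigItem(
    id="verboseLogging",
    default_value=False,
    description="Enable verbose logging",
)

# validate_value gates set_value; invalid input raises ValueError there.
assert admin_name.validate_value("alice")
assert not admin_name.validate_value("")  # allow_empty defaults to False
assert verbose.validate_value(True)

# as_dict(service_id) is the shape serialized for API clients; it reads
# the current value from userdata, hence the writable-file assumption.
print(admin_name.as_dict("example-service"))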


@@ -0,0 +1,53 @@
import re
from typing import Tuple, Optional
FLAKE_CONFIG_PATH = "/etc/nixos/sp-modules/flake.nix"
class FlakeServiceManager:
def __enter__(self) -> "FlakeServiceManager":
self.services = {}
with open(FLAKE_CONFIG_PATH, "r") as file:
for line in file:
service_name, url = self._extract_services(input_string=line)
if service_name and url:
self.services[service_name] = url
return self
def _extract_services(
self, input_string: str
) -> Tuple[Optional[str], Optional[str]]:
pattern = r"inputs\.([\w-]+)\.url\s*=\s*([\S]+);"
match = re.search(pattern, input_string)
if match:
variable_name = match.group(1)
url = match.group(2)
return variable_name, url
else:
return None, None
def __exit__(self, exc_type, exc_value, traceback) -> None:
with open(FLAKE_CONFIG_PATH, "w") as file:
file.write(
"""
{
description = "SelfPrivacy NixOS PoC modules/extensions/bundles/packages/etc";\n
"""
)
for key, value in self.services.items():
file.write(
f"""
inputs.{key}.url = {value};
"""
)
file.write(
"""
outputs = _: { };
}
"""
)
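
A short usage sketch of the context manager above: entries live in memory between __enter__ and __exit__, and the whole flake.nix is rewritten on exit. The module URL follows the same pattern used for suggested modules elsewhere in this changeset.

from selfprivacy_api.services.flake_service_manager import FlakeServiceManager

with FlakeServiceManager() as manager:
    # Each entry is rendered as `inputs.<name>.url = <url>;` on exit.
    manager.services["gitea"] = (
        "git+https://git.selfprivacy.org/SelfPrivacy/"
        "selfprivacy-nixos-config.git?ref=flakes&dir=sp-modules/gitea"
    )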


@@ -1,5 +1,9 @@
"""Generic size counter using pathlib"""
import pathlib
import logging
logger = logging.getLogger(__name__)
def get_storage_usage(path: str) -> int:
@@ -17,5 +21,5 @@ def get_storage_usage(path: str) -> int:
except FileNotFoundError:
pass
except Exception as error:
print(error)
logger.error(error)
return storage_usage
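
For reference, a minimal call of the counter above; a missing path contributes nothing thanks to the FileNotFoundError branch, and symlinks are not followed.

from selfprivacy_api.services.generic_size_counter import get_storage_usage

# Byte count of everything under the path; 0 if the folder is absent.
print(get_storage_usage("/var/lib/nextcloud"))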


@@ -1,96 +0,0 @@
"""Class representing Bitwarden service"""
import base64
import subprocess
from typing import Optional, List
from selfprivacy_api.utils import get_domain
from selfprivacy_api.utils.systemd import get_service_status
from selfprivacy_api.services.service import Service, ServiceStatus
from selfprivacy_api.services.gitea.icon import GITEA_ICON
class Gitea(Service):
"""Class representing Gitea service"""
@staticmethod
def get_id() -> str:
"""Return service id."""
return "gitea"
@staticmethod
def get_display_name() -> str:
"""Return service display name."""
return "Gitea"
@staticmethod
def get_description() -> str:
"""Return service description."""
return "Gitea is a Git forge."
@staticmethod
def get_svg_icon() -> str:
"""Read SVG icon from file and return it as base64 encoded string."""
return base64.b64encode(GITEA_ICON.encode("utf-8")).decode("utf-8")
@staticmethod
def get_url() -> Optional[str]:
"""Return service url."""
domain = get_domain()
return f"https://git.{domain}"
@staticmethod
def get_subdomain() -> Optional[str]:
return "git"
@staticmethod
def is_movable() -> bool:
return True
@staticmethod
def is_required() -> bool:
return False
@staticmethod
def get_backup_description() -> str:
return "Git repositories, database and user data."
@staticmethod
def get_status() -> ServiceStatus:
"""
Return Gitea status from systemd.
Use command return code to determine status.
Return code 0 means service is running.
Return code 1 or 2 means service is in error state.
Return code 3 means service is stopped.
Return code 4 means service is off.
"""
return get_service_status("gitea.service")
@staticmethod
def stop():
subprocess.run(["systemctl", "stop", "gitea.service"])
@staticmethod
def start():
subprocess.run(["systemctl", "start", "gitea.service"])
@staticmethod
def restart():
subprocess.run(["systemctl", "restart", "gitea.service"])
@staticmethod
def get_configuration():
return {}
@staticmethod
def set_configuration(config_items):
return super().set_configuration(config_items)
@staticmethod
def get_logs():
return ""
@staticmethod
def get_folders() -> List[str]:
return ["/var/lib/gitea"]


@@ -1,3 +0,0 @@
<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M2.60007 10.5899L8.38007 4.79995L10.0701 6.49995C9.83007 7.34995 10.2201 8.27995 11.0001 8.72995V14.2699C10.4001 14.6099 10.0001 15.2599 10.0001 15.9999C10.0001 16.5304 10.2108 17.0391 10.5859 17.4142C10.9609 17.7892 11.4696 17.9999 12.0001 17.9999C12.5305 17.9999 13.0392 17.7892 13.4143 17.4142C13.7894 17.0391 14.0001 16.5304 14.0001 15.9999C14.0001 15.2599 13.6001 14.6099 13.0001 14.2699V9.40995L15.0701 11.4999C15.0001 11.6499 15.0001 11.8199 15.0001 11.9999C15.0001 12.5304 15.2108 13.0391 15.5859 13.4142C15.9609 13.7892 16.4696 13.9999 17.0001 13.9999C17.5305 13.9999 18.0392 13.7892 18.4143 13.4142C18.7894 13.0391 19.0001 12.5304 19.0001 11.9999C19.0001 11.4695 18.7894 10.9608 18.4143 10.5857C18.0392 10.2107 17.5305 9.99995 17.0001 9.99995C16.8201 9.99995 16.6501 9.99995 16.5001 10.0699L13.9301 7.49995C14.1901 6.56995 13.7101 5.54995 12.7801 5.15995C12.3501 4.99995 11.9001 4.95995 11.5001 5.06995L9.80007 3.37995L10.5901 2.59995C11.3701 1.80995 12.6301 1.80995 13.4101 2.59995L21.4001 10.5899C22.1901 11.3699 22.1901 12.6299 21.4001 13.4099L13.4101 21.3999C12.6301 22.1899 11.3701 22.1899 10.5901 21.3999L2.60007 13.4099C1.81007 12.6299 1.81007 11.3699 2.60007 10.5899Z" fill="black"/>
</svg>


@@ -1,5 +0,0 @@
GITEA_ICON = """
<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M2.60007 10.5899L8.38007 4.79995L10.0701 6.49995C9.83007 7.34995 10.2201 8.27995 11.0001 8.72995V14.2699C10.4001 14.6099 10.0001 15.2599 10.0001 15.9999C10.0001 16.5304 10.2108 17.0391 10.5859 17.4142C10.9609 17.7892 11.4696 17.9999 12.0001 17.9999C12.5305 17.9999 13.0392 17.7892 13.4143 17.4142C13.7894 17.0391 14.0001 16.5304 14.0001 15.9999C14.0001 15.2599 13.6001 14.6099 13.0001 14.2699V9.40995L15.0701 11.4999C15.0001 11.6499 15.0001 11.8199 15.0001 11.9999C15.0001 12.5304 15.2108 13.0391 15.5859 13.4142C15.9609 13.7892 16.4696 13.9999 17.0001 13.9999C17.5305 13.9999 18.0392 13.7892 18.4143 13.4142C18.7894 13.0391 19.0001 12.5304 19.0001 11.9999C19.0001 11.4695 18.7894 10.9608 18.4143 10.5857C18.0392 10.2107 17.5305 9.99995 17.0001 9.99995C16.8201 9.99995 16.6501 9.99995 16.5001 10.0699L13.9301 7.49995C14.1901 6.56995 13.7101 5.54995 12.7801 5.15995C12.3501 4.99995 11.9001 4.95995 11.5001 5.06995L9.80007 3.37995L10.5901 2.59995C11.3701 1.80995 12.6301 1.80995 13.4101 2.59995L21.4001 10.5899C22.1901 11.3699 22.1901 12.6299 21.4001 13.4099L13.4101 21.3999C12.6301 22.1899 11.3701 22.1899 10.5901 21.3999L2.60007 13.4099C1.81007 12.6299 1.81007 11.3699 2.60007 10.5899Z" fill="black"/>
</svg>
"""


@@ -1,108 +0,0 @@
"""Class representing Jitsi Meet service"""
import base64
import subprocess
from typing import Optional, List
from selfprivacy_api.jobs import Job
from selfprivacy_api.utils.systemd import (
get_service_status_from_several_units,
)
from selfprivacy_api.services.service import Service, ServiceStatus
from selfprivacy_api.utils import get_domain
from selfprivacy_api.utils.block_devices import BlockDevice
from selfprivacy_api.services.jitsimeet.icon import JITSI_ICON
class JitsiMeet(Service):
"""Class representing Jitsi service"""
@staticmethod
def get_id() -> str:
"""Return service id."""
return "jitsi-meet"
@staticmethod
def get_display_name() -> str:
"""Return service display name."""
return "JitsiMeet"
@staticmethod
def get_description() -> str:
"""Return service description."""
return "Jitsi Meet is a free and open-source video conferencing solution."
@staticmethod
def get_svg_icon() -> str:
"""Read SVG icon from file and return it as base64 encoded string."""
return base64.b64encode(JITSI_ICON.encode("utf-8")).decode("utf-8")
@staticmethod
def get_url() -> Optional[str]:
"""Return service url."""
domain = get_domain()
return f"https://meet.{domain}"
@staticmethod
def get_subdomain() -> Optional[str]:
return "meet"
@staticmethod
def is_movable() -> bool:
return False
@staticmethod
def is_required() -> bool:
return False
@staticmethod
def get_backup_description() -> str:
return "Secrets that are used to encrypt the communication."
@staticmethod
def get_status() -> ServiceStatus:
return get_service_status_from_several_units(
["jitsi-videobridge.service", "jicofo.service"]
)
@staticmethod
def stop():
subprocess.run(
["systemctl", "stop", "jitsi-videobridge.service"],
check=False,
)
subprocess.run(["systemctl", "stop", "jicofo.service"], check=False)
@staticmethod
def start():
subprocess.run(
["systemctl", "start", "jitsi-videobridge.service"],
check=False,
)
subprocess.run(["systemctl", "start", "jicofo.service"], check=False)
@staticmethod
def restart():
subprocess.run(
["systemctl", "restart", "jitsi-videobridge.service"],
check=False,
)
subprocess.run(["systemctl", "restart", "jicofo.service"], check=False)
@staticmethod
def get_configuration():
return {}
@staticmethod
def set_configuration(config_items):
return super().set_configuration(config_items)
@staticmethod
def get_logs():
return ""
@staticmethod
def get_folders() -> List[str]:
return ["/var/lib/jitsi-meet"]
def move_to_volume(self, volume: BlockDevice) -> Job:
raise NotImplementedError("jitsi-meet service is not movable")


@@ -1,5 +0,0 @@
JITSI_ICON = """
<svg width="32" height="32" viewBox="0 0 32 32" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M26.6665 2.66663H5.33317C3.8665 2.66663 2.67984 3.86663 2.67984 5.33329L2.6665 29.3333L7.99984 24H26.6665C28.1332 24 29.3332 22.8 29.3332 21.3333V5.33329C29.3332 3.86663 28.1332 2.66663 26.6665 2.66663ZM26.6665 21.3333H6.89317L5.33317 22.8933V5.33329H26.6665V21.3333ZM18.6665 14.1333L22.6665 17.3333V9.33329L18.6665 12.5333V9.33329H9.33317V17.3333H18.6665V14.1333Z" fill="black"/>
</svg>
"""


@@ -35,13 +35,13 @@ class MailServer(Service):
def get_user() -> str:
return "virtualMail"
@staticmethod
def get_url() -> Optional[str]:
@classmethod
def get_url(cls) -> Optional[str]:
"""Return service url."""
return None
@staticmethod
def get_subdomain() -> Optional[str]:
@classmethod
def get_subdomain(cls) -> Optional[str]:
return None
@staticmethod
@@ -89,18 +89,6 @@ class MailServer(Service):
subprocess.run(["systemctl", "restart", "dovecot2.service"], check=False)
subprocess.run(["systemctl", "restart", "postfix.service"], check=False)
@staticmethod
def get_configuration():
return {}
@staticmethod
def set_configuration(config_items):
return super().set_configuration(config_items)
@staticmethod
def get_logs():
return ""
@staticmethod
def get_folders() -> List[str]:
return ["/var/vmail", "/var/sieve"]


@@ -1,104 +0,0 @@
"""Class representing Nextcloud service."""
import base64
import subprocess
from typing import Optional, List
from selfprivacy_api.utils import get_domain
from selfprivacy_api.jobs import Job, Jobs
from selfprivacy_api.utils.systemd import get_service_status
from selfprivacy_api.services.service import Service, ServiceStatus
from selfprivacy_api.services.nextcloud.icon import NEXTCLOUD_ICON
class Nextcloud(Service):
"""Class representing Nextcloud service."""
@staticmethod
def get_id() -> str:
"""Return service id."""
return "nextcloud"
@staticmethod
def get_display_name() -> str:
"""Return service display name."""
return "Nextcloud"
@staticmethod
def get_description() -> str:
"""Return service description."""
return "Nextcloud is a cloud storage service that offers a web interface and a desktop client."
@staticmethod
def get_svg_icon() -> str:
"""Read SVG icon from file and return it as base64 encoded string."""
return base64.b64encode(NEXTCLOUD_ICON.encode("utf-8")).decode("utf-8")
@staticmethod
def get_url() -> Optional[str]:
"""Return service url."""
domain = get_domain()
return f"https://cloud.{domain}"
@staticmethod
def get_subdomain() -> Optional[str]:
return "cloud"
@staticmethod
def is_movable() -> bool:
return True
@staticmethod
def is_required() -> bool:
return False
@staticmethod
def get_backup_description() -> str:
return "All the files and other data stored in Nextcloud."
@staticmethod
def get_status() -> ServiceStatus:
"""
Return Nextcloud status from systemd.
Use command return code to determine status.
Return code 0 means service is running.
Return code 1 or 2 means service is in error state.
Return code 3 means service is stopped.
Return code 4 means service is off.
"""
return get_service_status("phpfpm-nextcloud.service")
@staticmethod
def stop():
"""Stop Nextcloud service."""
subprocess.Popen(["systemctl", "stop", "phpfpm-nextcloud.service"])
@staticmethod
def start():
"""Start Nextcloud service."""
subprocess.Popen(["systemctl", "start", "phpfpm-nextcloud.service"])
@staticmethod
def restart():
"""Restart Nextcloud service."""
subprocess.Popen(["systemctl", "restart", "phpfpm-nextcloud.service"])
@staticmethod
def get_configuration() -> dict:
"""Return Nextcloud configuration."""
return {}
@staticmethod
def set_configuration(config_items):
return super().set_configuration(config_items)
@staticmethod
def get_logs():
"""Return Nextcloud logs."""
return ""
@staticmethod
def get_folders() -> List[str]:
return ["/var/lib/nextcloud"]


@@ -1,12 +0,0 @@
NEXTCLOUD_ICON = """
<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_51106_4974)">
<path d="M12.018 6.53699C9.518 6.53699 7.418 8.24899 6.777 10.552C6.217 9.31999 4.984 8.44699 3.552 8.44699C2.61116 8.45146 1.71014 8.82726 1.04495 9.49264C0.379754 10.158 0.00420727 11.0591 0 12C0.00420727 12.9408 0.379754 13.842 1.04495 14.5073C1.71014 15.1727 2.61116 15.5485 3.552 15.553C4.984 15.553 6.216 14.679 6.776 13.447C7.417 15.751 9.518 17.463 12.018 17.463C14.505 17.463 16.594 15.77 17.249 13.486C17.818 14.696 19.032 15.553 20.447 15.553C21.3881 15.549 22.2895 15.1734 22.955 14.508C23.6205 13.8425 23.9961 12.9411 24 12C23.9958 11.059 23.6201 10.1577 22.9547 9.49229C22.2893 8.82688 21.388 8.4512 20.447 8.44699C19.031 8.44699 17.817 9.30499 17.248 10.514C16.594 8.22999 14.505 6.53599 12.018 6.53699ZM12.018 8.62199C13.896 8.62199 15.396 10.122 15.396 12C15.396 13.878 13.896 15.378 12.018 15.378C11.5739 15.38 11.1338 15.2939 10.7231 15.1249C10.3124 14.9558 9.93931 14.707 9.62532 14.393C9.31132 14.0789 9.06267 13.7057 8.89373 13.295C8.72478 12.8842 8.63888 12.4441 8.641 12C8.641 10.122 10.141 8.62199 12.018 8.62199ZM3.552 10.532C4.374 10.532 5.019 11.177 5.019 12C5.019 12.823 4.375 13.467 3.552 13.468C3.35871 13.47 3.16696 13.4334 2.988 13.3603C2.80905 13.2872 2.64648 13.1792 2.50984 13.0424C2.3732 12.9057 2.26524 12.7431 2.19229 12.5641C2.11934 12.3851 2.08286 12.1933 2.085 12C2.085 11.177 2.729 10.533 3.552 10.533V10.532ZM20.447 10.532C21.27 10.532 21.915 11.177 21.915 12C21.915 12.823 21.27 13.468 20.447 13.468C20.2537 13.47 20.062 13.4334 19.883 13.3603C19.704 13.2872 19.5415 13.1792 19.4048 13.0424C19.2682 12.9057 19.1602 12.7431 19.0873 12.5641C19.0143 12.3851 18.9779 12.1933 18.98 12C18.98 11.177 19.624 10.533 20.447 10.533V10.532Z" fill="black"/>
</g>
<defs>
<clipPath id="clip0_51106_4974">
<rect width="24" height="24" fill="white"/>
</clipPath>
</defs>
</svg>
"""


@@ -1,10 +0,0 @@
<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_51106_4974)">
<path d="M12.018 6.53699C9.518 6.53699 7.418 8.24899 6.777 10.552C6.217 9.31999 4.984 8.44699 3.552 8.44699C2.61116 8.45146 1.71014 8.82726 1.04495 9.49264C0.379754 10.158 0.00420727 11.0591 0 12C0.00420727 12.9408 0.379754 13.842 1.04495 14.5073C1.71014 15.1727 2.61116 15.5485 3.552 15.553C4.984 15.553 6.216 14.679 6.776 13.447C7.417 15.751 9.518 17.463 12.018 17.463C14.505 17.463 16.594 15.77 17.249 13.486C17.818 14.696 19.032 15.553 20.447 15.553C21.3881 15.549 22.2895 15.1734 22.955 14.508C23.6205 13.8425 23.9961 12.9411 24 12C23.9958 11.059 23.6201 10.1577 22.9547 9.49229C22.2893 8.82688 21.388 8.4512 20.447 8.44699C19.031 8.44699 17.817 9.30499 17.248 10.514C16.594 8.22999 14.505 6.53599 12.018 6.53699ZM12.018 8.62199C13.896 8.62199 15.396 10.122 15.396 12C15.396 13.878 13.896 15.378 12.018 15.378C11.5739 15.38 11.1338 15.2939 10.7231 15.1249C10.3124 14.9558 9.93931 14.707 9.62532 14.393C9.31132 14.0789 9.06267 13.7057 8.89373 13.295C8.72478 12.8842 8.63888 12.4441 8.641 12C8.641 10.122 10.141 8.62199 12.018 8.62199ZM3.552 10.532C4.374 10.532 5.019 11.177 5.019 12C5.019 12.823 4.375 13.467 3.552 13.468C3.35871 13.47 3.16696 13.4334 2.988 13.3603C2.80905 13.2872 2.64648 13.1792 2.50984 13.0424C2.3732 12.9057 2.26524 12.7431 2.19229 12.5641C2.11934 12.3851 2.08286 12.1933 2.085 12C2.085 11.177 2.729 10.533 3.552 10.533V10.532ZM20.447 10.532C21.27 10.532 21.915 11.177 21.915 12C21.915 12.823 21.27 13.468 20.447 13.468C20.2537 13.47 20.062 13.4334 19.883 13.3603C19.704 13.2872 19.5415 13.1792 19.4048 13.0424C19.2682 12.9057 19.1602 12.7431 19.0873 12.5641C19.0143 12.3851 18.9779 12.1933 18.98 12C18.98 11.177 19.624 10.533 20.447 10.533V10.532Z" fill="black"/>
</g>
<defs>
<clipPath id="clip0_51106_4974">
<rect width="24" height="24" fill="white"/>
</clipPath>
</defs>
</svg>


@@ -1,89 +0,0 @@
"""Class representing ocserv service."""
import base64
import subprocess
import typing
from selfprivacy_api.jobs import Job
from selfprivacy_api.utils.systemd import get_service_status
from selfprivacy_api.services.service import Service, ServiceStatus
from selfprivacy_api.utils.block_devices import BlockDevice
from selfprivacy_api.services.ocserv.icon import OCSERV_ICON
class Ocserv(Service):
"""Class representing ocserv service."""
@staticmethod
def get_id() -> str:
return "ocserv"
@staticmethod
def get_display_name() -> str:
return "OpenConnect VPN"
@staticmethod
def get_description() -> str:
return "OpenConnect VPN to connect your devices and access the internet."
@staticmethod
def get_svg_icon() -> str:
return base64.b64encode(OCSERV_ICON.encode("utf-8")).decode("utf-8")
@staticmethod
def get_url() -> typing.Optional[str]:
"""Return service url."""
return None
@staticmethod
def get_subdomain() -> typing.Optional[str]:
return "vpn"
@staticmethod
def is_movable() -> bool:
return False
@staticmethod
def is_required() -> bool:
return False
@staticmethod
def can_be_backed_up() -> bool:
return False
@staticmethod
def get_backup_description() -> str:
return "Nothing to backup."
@staticmethod
def get_status() -> ServiceStatus:
return get_service_status("ocserv.service")
@staticmethod
def stop():
subprocess.run(["systemctl", "stop", "ocserv.service"], check=False)
@staticmethod
def start():
subprocess.run(["systemctl", "start", "ocserv.service"], check=False)
@staticmethod
def restart():
subprocess.run(["systemctl", "restart", "ocserv.service"], check=False)
@staticmethod
def get_configuration():
return {}
@staticmethod
def set_configuration(config_items):
return super().set_configuration(config_items)
@staticmethod
def get_logs():
return ""
@staticmethod
def get_folders() -> typing.List[str]:
return []
def move_to_volume(self, volume: BlockDevice) -> Job:
raise NotImplementedError("ocserv service is not movable")


@@ -1,5 +0,0 @@
OCSERV_ICON = """
<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M12 1L3 5V11C3 16.55 6.84 21.74 12 23C17.16 21.74 21 16.55 21 11V5L12 1ZM12 11.99H19C18.47 16.11 15.72 19.78 12 20.93V12H5V6.3L12 3.19V11.99Z" fill="black"/>
</svg>
"""


@@ -1,3 +0,0 @@
<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M12 1L3 5V11C3 16.55 6.84 21.74 12 23C17.16 21.74 21 16.55 21 11V5L12 1ZM12 11.99H19C18.47 16.11 15.72 19.78 12 20.93V12H5V6.3L12 3.19V11.99Z" fill="black"/>
</svg>


@@ -1,11 +1,16 @@
from __future__ import annotations
import logging
import subprocess
import pathlib
from pydantic import BaseModel
from os.path import exists
from pydantic import BaseModel
from selfprivacy_api.utils.block_devices import BlockDevice, BlockDevices
logger = logging.getLogger(__name__)
# tests override it to a tmpdir
VOLUMES_PATH = "/volumes"
@@ -87,7 +92,7 @@ class Bind:
check=True,
)
except subprocess.CalledProcessError as error:
print(error.stderr)
logger.error(error.stderr)
raise BindError(f"Unable to bind {source} to {target} :{error.stderr}")
def unbind(self) -> None:
@@ -100,8 +105,8 @@ class Bind:
["umount", self.binding_path],
check=True,
)
except subprocess.CalledProcessError:
raise BindError(f"Unable to unmount folder {self.binding_path}.")
except subprocess.CalledProcessError as error:
raise BindError(f"Unable to unmount folder {self.binding_path}. {error}")
pass
def ensure_ownership(self) -> None:
@@ -119,7 +124,7 @@ class Bind:
stderr=subprocess.PIPE,
)
except subprocess.CalledProcessError as error:
print(error.stderr)
logger.error(error.stderr)
error_message = (
f"Unable to set ownership of {true_location} :{error.stderr}"
)


@@ -1,104 +0,0 @@
"""Class representing Nextcloud service."""
import base64
import subprocess
from typing import Optional, List
from selfprivacy_api.utils import get_domain
from selfprivacy_api.services.owned_path import OwnedPath
from selfprivacy_api.utils.systemd import get_service_status
from selfprivacy_api.services.service import Service, ServiceStatus
from selfprivacy_api.services.pleroma.icon import PLEROMA_ICON
class Pleroma(Service):
"""Class representing Pleroma service."""
@staticmethod
def get_id() -> str:
return "pleroma"
@staticmethod
def get_display_name() -> str:
return "Pleroma"
@staticmethod
def get_description() -> str:
return "Pleroma is a microblogging service that offers a web interface and a desktop client."
@staticmethod
def get_svg_icon() -> str:
return base64.b64encode(PLEROMA_ICON.encode("utf-8")).decode("utf-8")
@staticmethod
def get_url() -> Optional[str]:
"""Return service url."""
domain = get_domain()
return f"https://social.{domain}"
@staticmethod
def get_subdomain() -> Optional[str]:
return "social"
@staticmethod
def is_movable() -> bool:
return True
@staticmethod
def is_required() -> bool:
return False
@staticmethod
def get_backup_description() -> str:
return "Your Pleroma accounts, posts and media."
@staticmethod
def get_status() -> ServiceStatus:
return get_service_status("pleroma.service")
@staticmethod
def stop():
subprocess.run(["systemctl", "stop", "pleroma.service"])
subprocess.run(["systemctl", "stop", "postgresql.service"])
@staticmethod
def start():
subprocess.run(["systemctl", "start", "pleroma.service"])
subprocess.run(["systemctl", "start", "postgresql.service"])
@staticmethod
def restart():
subprocess.run(["systemctl", "restart", "pleroma.service"])
subprocess.run(["systemctl", "restart", "postgresql.service"])
@staticmethod
def get_configuration(config_items):
return {}
@staticmethod
def set_configuration(config_items):
return super().set_configuration(config_items)
@staticmethod
def get_logs():
return ""
@staticmethod
def get_owned_folders() -> List[OwnedPath]:
"""
Get a list of occupied directories with ownership info.
Pleroma has folders that are owned by different users.
"""
return [
OwnedPath(
path="/var/lib/pleroma",
owner="pleroma",
group="pleroma",
),
OwnedPath(
path="/var/lib/postgresql",
owner="postgres",
group="postgres",
),
]


@@ -1,12 +0,0 @@
PLEROMA_ICON = """
<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_51106_4998)">
<path d="M6.35999 1.07076e-06C6.11451 -0.000261753 5.87139 0.0478616 5.64452 0.14162C5.41766 0.235378 5.21149 0.372932 5.03782 0.546418C4.86415 0.719904 4.72638 0.925919 4.63237 1.15269C4.53837 1.37945 4.48999 1.62252 4.48999 1.868V24H10.454V1.07076e-06H6.35999ZM13.473 1.07076e-06V12H17.641C18.1364 12 18.6115 11.8032 18.9619 11.4529C19.3122 11.1026 19.509 10.6274 19.509 10.132V1.07076e-06H13.473ZM13.473 18.036V24H17.641C18.1364 24 18.6115 23.8032 18.9619 23.4529C19.3122 23.1026 19.509 22.6274 19.509 22.132V18.036H13.473Z" fill="black"/>
</g>
<defs>
<clipPath id="clip0_51106_4998">
<rect width="24" height="24" fill="white"/>
</clipPath>
</defs>
</svg>
"""


@@ -1,10 +0,0 @@
<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_51106_4998)">
<path d="M6.35999 1.07076e-06C6.11451 -0.000261753 5.87139 0.0478616 5.64452 0.14162C5.41766 0.235378 5.21149 0.372932 5.03782 0.546418C4.86415 0.719904 4.72638 0.925919 4.63237 1.15269C4.53837 1.37945 4.48999 1.62252 4.48999 1.868V24H10.454V1.07076e-06H6.35999ZM13.473 1.07076e-06V12H17.641C18.1364 12 18.6115 11.8032 18.9619 11.4529C19.3122 11.1026 19.509 10.6274 19.509 10.132V1.07076e-06H13.473ZM13.473 18.036V24H17.641C18.1364 24 18.6115 23.8032 18.9619 23.4529C19.3122 23.1026 19.509 22.6274 19.509 22.132V18.036H13.473Z" fill="black"/>
</g>
<defs>
<clipPath id="clip0_51106_4998">
<rect width="24" height="24" fill="white"/>
</clipPath>
</defs>
</svg>


@@ -0,0 +1,86 @@
"""Class representing Nextcloud service."""
import base64
import subprocess
from typing import Optional, List
from selfprivacy_api.services.owned_path import OwnedPath
from selfprivacy_api.utils.systemd import get_service_status
from selfprivacy_api.services.service import Service, ServiceStatus
from selfprivacy_api.services.prometheus.icon import PROMETHEUS_ICON
class Prometheus(Service):
"""Class representing Prometheus service."""
@staticmethod
def get_id() -> str:
return "monitoring"
@staticmethod
def get_display_name() -> str:
return "Prometheus"
@staticmethod
def get_description() -> str:
return "Prometheus is used for resource monitoring and alerts."
@staticmethod
def get_svg_icon() -> str:
return base64.b64encode(PROMETHEUS_ICON.encode("utf-8")).decode("utf-8")
@staticmethod
def get_url() -> Optional[str]:
"""Return service url."""
return None
@staticmethod
def get_subdomain() -> Optional[str]:
return None
@staticmethod
def is_movable() -> bool:
return False
@staticmethod
def is_required() -> bool:
return True
@staticmethod
def is_system_service() -> bool:
return True
@staticmethod
def can_be_backed_up() -> bool:
return False
@staticmethod
def get_backup_description() -> str:
return "Backups are not available for Prometheus."
@staticmethod
def get_status() -> ServiceStatus:
return get_service_status("prometheus.service")
@staticmethod
def stop():
subprocess.run(["systemctl", "stop", "prometheus.service"])
@staticmethod
def start():
subprocess.run(["systemctl", "start", "prometheus.service"])
@staticmethod
def restart():
subprocess.run(["systemctl", "restart", "prometheus.service"])
@staticmethod
def get_owned_folders() -> List[OwnedPath]:
return [
OwnedPath(
path="/var/lib/prometheus",
owner="prometheus",
group="prometheus",
),
]


@@ -0,0 +1,5 @@
PROMETHEUS_ICON = """
<svg width="128" height="128" viewBox="0 0 128 128" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M64.125 0.51C99.229 0.517 128.045 29.133 128 63.951C127.955 99.293 99.258 127.515 63.392 127.49C28.325 127.466 -0.0249987 98.818 1.26289e-06 63.434C0.0230013 28.834 28.898 0.503 64.125 0.51ZM44.72 22.793C45.523 26.753 44.745 30.448 43.553 34.082C42.73 36.597 41.591 39.022 40.911 41.574C39.789 45.777 38.52 50.004 38.052 54.3C37.381 60.481 39.81 65.925 43.966 71.34L24.86 67.318C24.893 67.92 24.86 68.148 24.925 68.342C26.736 73.662 29.923 78.144 33.495 82.372C33.872 82.818 34.732 83.046 35.372 83.046C54.422 83.084 73.473 83.08 92.524 83.055C93.114 83.055 93.905 82.945 94.265 82.565C98.349 78.271 101.47 73.38 103.425 67.223L83.197 71.185C84.533 68.567 86.052 66.269 86.93 63.742C89.924 55.099 88.682 46.744 84.385 38.862C80.936 32.538 77.754 26.242 79.475 18.619C75.833 22.219 74.432 26.798 73.543 31.517C72.671 36.167 72.154 40.881 71.478 45.6C71.38 45.457 71.258 45.35 71.236 45.227C71.1507 44.7338 71.0919 44.2365 71.06 43.737C70.647 36.011 69.14 28.567 65.954 21.457C64.081 17.275 62.013 12.995 63.946 8.001C62.639 8.694 61.456 9.378 60.608 10.357C58.081 13.277 57.035 16.785 56.766 20.626C56.535 23.908 56.22 27.205 55.61 30.432C54.97 33.824 53.96 37.146 51.678 40.263C50.76 33.607 50.658 27.019 44.722 22.793H44.72ZM93.842 88.88H34.088V99.26H93.842V88.88ZM45.938 104.626C45.889 113.268 54.691 119.707 65.571 119.24C74.591 118.851 82.57 111.756 81.886 104.626H45.938Z" fill="black"/>
</svg>
"""


@@ -1,16 +1,26 @@
"""Abstract class for a service running on a server"""
from abc import ABC, abstractmethod
import logging
from typing import List, Optional
from os.path import exists
from selfprivacy_api import utils
from selfprivacy_api.utils import ReadUserData, WriteUserData
from selfprivacy_api.services.config_item import ServiceConfigItem
from selfprivacy_api.utils.default_subdomains import DEFAULT_SUBDOMAINS
from selfprivacy_api.utils import ReadUserData, WriteUserData, get_domain
from selfprivacy_api.utils.waitloop import wait_until_true
from selfprivacy_api.utils.block_devices import BlockDevice, BlockDevices
from selfprivacy_api.jobs import Job, Jobs, JobStatus, report_progress
from selfprivacy_api.jobs.upgrade_system import rebuild_system
from selfprivacy_api.models.services import ServiceStatus, ServiceDnsRecord
from selfprivacy_api.models.services import (
License,
ServiceStatus,
ServiceDnsRecord,
SupportLevel,
)
from selfprivacy_api.services.generic_size_counter import get_storage_usage
from selfprivacy_api.services.owned_path import OwnedPath, Bind
from selfprivacy_api.services.moving import (
@@ -26,6 +36,8 @@ from selfprivacy_api.services.moving import (
DEFAULT_START_STOP_TIMEOUT = 5 * 60
logger = logging.getLogger(__name__)
class Service(ABC):
"""
@@ -33,6 +45,8 @@ class Service(ABC):
can be installed, configured and used by a user.
"""
config_items: dict[str, "ServiceConfigItem"] = {}
@staticmethod
@abstractmethod
def get_id() -> str:
@@ -65,21 +79,28 @@
"""
pass
@staticmethod
@abstractmethod
def get_url() -> Optional[str]:
@classmethod
def get_url(cls) -> Optional[str]:
"""
The url of the service if it is accessible from the internet browser.
"""
pass
domain = get_domain()
subdomain = cls.get_subdomain()
return f"https://{subdomain}.{domain}"
@staticmethod
@abstractmethod
def get_subdomain() -> Optional[str]:
@classmethod
def get_subdomain(cls) -> Optional[str]:
"""
The assigned primary subdomain for this service.
"""
pass
name = cls.get_id()
with ReadUserData() as user_data:
if "modules" in user_data:
if name in user_data["modules"]:
if "subdomain" in user_data["modules"][name]:
return user_data["modules"][name]["subdomain"]
return DEFAULT_SUBDOMAINS.get(name)
@classmethod
def get_user(cls) -> Optional[str]:
@@ -97,6 +118,11 @@
"""
return cls.get_user()
@staticmethod
def is_always_active() -> bool:
"""`True` if the service cannot be stopped, which is true for api itself"""
return False
@staticmethod
@abstractmethod
def is_movable() -> bool:
@@ -135,6 +161,47 @@
with ReadUserData() as user_data:
return user_data.get("modules", {}).get(name, {}).get("enable", False)
@classmethod
def is_installed(cls) -> bool:
"""
`True` if the service is installed.
`False` if there is no module data in user data
"""
name = cls.get_id()
with ReadUserData() as user_data:
return user_data.get("modules", {}).get(name, {}) != {}
def is_system_service(self) -> bool:
"""
`True` if the service is a system service and should be hidden from the user.
`False` if it is not a system service.
"""
return False
def get_license(self) -> List[License]:
"""
The licenses of the service.
"""
return []
def get_homepage(self) -> Optional[str]:
"""
The homepage of the service.
"""
return None
def get_source_page(self) -> Optional[str]:
"""
The source page of the service.
"""
return None
def get_support_level(self) -> SupportLevel:
"""
The support level of the service.
"""
return SupportLevel.NORMAL
@staticmethod
@abstractmethod
def get_status() -> ServiceStatus:
@@ -179,20 +246,24 @@
"""Restart the service. Usually this means restarting systemd unit."""
pass
@staticmethod
@abstractmethod
def get_configuration():
pass
@classmethod
def get_configuration(cls):
return {
key: cls.config_items[key].as_dict(cls.get_id()) for key in cls.config_items
}
@staticmethod
@abstractmethod
def set_configuration(config_items):
pass
@staticmethod
@abstractmethod
def get_logs():
pass
@classmethod
def set_configuration(cls, config_items):
for key, value in config_items.items():
if key not in cls.config_items:
raise ValueError(f"Key {key} is not valid for {cls.get_id()}")
if cls.config_items[key].validate_value(value) is False:
raise ValueError(f"Value {value} is not valid for {key}")
for key, value in config_items.items():
cls.config_items[key].set_value(
value,
cls.get_id(),
)
@classmethod
def get_storage_usage(cls) -> int:
@@ -206,6 +277,16 @@
storage_used += get_storage_usage(folder)
return storage_used
@classmethod
def has_folders(cls) -> bool:
"""
If there are no folders on disk, moving is a no-op
"""
for folder in cls.get_folders():
if exists(folder):
return True
return False
@classmethod
def get_dns_records(cls, ip4: str, ip6: Optional[str]) -> List[ServiceDnsRecord]:
subdomain = cls.get_subdomain()
@@ -267,6 +348,10 @@
)
return [owned_folder.path for owned_folder in cls.get_owned_folders()]
@classmethod
def get_folders_to_back_up(cls) -> List[str]:
return cls.get_folders()
@classmethod
def get_owned_folders(cls) -> List[OwnedPath]:
"""
@@ -283,6 +368,9 @@
def get_foldername(path: str) -> str:
return path.split("/")[-1]
def get_postgresql_databases(self) -> List[str]:
return []
# TODO: with better json utils, it can be one line, and not a separate function
@classmethod
def set_location(cls, volume: BlockDevice):
@@ -327,7 +415,10 @@
binds = self.binds()
if binds == []:
raise MoveError("nothing to move")
check_binds(current_volume_name, binds)
# It is OK if the service is uninitialized; we will just re-register it
if self.has_folders():
check_binds(current_volume_name, binds)
def do_move_to_volume(
self,
@@ -370,7 +461,18 @@
service_name = self.get_display_name()
report_progress(0, job, "Performing pre-move checks...")
self.assert_can_move(volume)
if not self.has_folders():
self.set_location(volume)
Jobs.update(
job=job,
status=JobStatus.FINISHED,
result=f"{service_name} moved successfully (no folders).",
status_text=f"NOT starting {service_name}",
progress=100,
)
return job
report_progress(5, job, f"Stopping {service_name}...")
assert self is not None
@@ -416,10 +518,16 @@
group=group,
)
def pre_backup(self):
def pre_backup(self, job: Job):
pass
def post_restore(self):
def post_backup(self, job: Job):
pass
def pre_restore(self, job: Job):
pass
def post_restore(self, job: Job):
pass
@@ -442,11 +550,15 @@ class StoppedService:
def __enter__(self) -> Service:
self.original_status = self.service.get_status()
if self.original_status not in [ServiceStatus.INACTIVE, ServiceStatus.FAILED]:
if (
self.original_status not in [ServiceStatus.INACTIVE, ServiceStatus.FAILED]
and not self.service.is_always_active()
):
try:
self.service.stop()
wait_until_true(
lambda: self.service.get_status() == ServiceStatus.INACTIVE,
lambda: self.service.get_status()
in [ServiceStatus.INACTIVE, ServiceStatus.FAILED],
timeout_sec=DEFAULT_START_STOP_TIMEOUT,
)
except TimeoutError as error:
@@ -456,7 +568,10 @@
return self.service
def __exit__(self, type, value, traceback):
if self.original_status in [ServiceStatus.ACTIVATING, ServiceStatus.ACTIVE]:
if (
self.original_status in [ServiceStatus.ACTIVATING, ServiceStatus.ACTIVE]
and not self.service.is_always_active()
):
try:
self.service.start()
wait_until_true(

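To illustrate the refactor above: get_url and get_subdomain are no longer abstract, so a subclass that is happy with the derived https://<subdomain>.<domain> URL and the userdata/DEFAULT_SUBDOMAINS lookup can simply omit them. A hedged sketch of a hypothetical subclass follows; only the methods relevant to the change are shown, and the remaining abstract methods still need implementations.

from typing import List

from selfprivacy_api.models.services import ServiceStatus
from selfprivacy_api.services.service import Service


class ExampleService(Service):
    """Hypothetical service leaning on the new classmethod defaults."""

    @staticmethod
    def get_id() -> str:
        return "example"

    @staticmethod
    def get_display_name() -> str:
        return "Example"

    # get_subdomain() now falls back to userdata, then DEFAULT_SUBDOMAINS,
    # and get_url() derives https://<subdomain>.<domain> from it, so
    # neither method is overridden here.

    @staticmethod
    def get_status() -> ServiceStatus:
        return ServiceStatus.INACTIVE

    @staticmethod
    def get_folders() -> List[str]:
        return ["/var/lib/example"]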

@@ -0,0 +1,514 @@
"""A Service implementation that loads all needed data from a JSON file"""
import base64
import logging
import json
import subprocess
from typing import List, Optional
from os.path import join, exists
from os import mkdir, remove
from selfprivacy_api.utils.postgres import PostgresDumper
from selfprivacy_api.jobs import Job, JobStatus, Jobs
from selfprivacy_api.models.services import (
License,
ServiceDnsRecord,
ServiceMetaData,
ServiceStatus,
SupportLevel,
)
from selfprivacy_api.services.flake_service_manager import FlakeServiceManager
from selfprivacy_api.services.generic_size_counter import get_storage_usage
from selfprivacy_api.services.owned_path import OwnedPath
from selfprivacy_api.services.service import Service
from selfprivacy_api.utils import ReadUserData, WriteUserData, get_domain
from selfprivacy_api.services.config_item import (
ServiceConfigItem,
StringServiceConfigItem,
BoolServiceConfigItem,
EnumServiceConfigItem,
IntServiceConfigItem,
)
from selfprivacy_api.utils.block_devices import BlockDevice, BlockDevices
from selfprivacy_api.utils.systemd import get_service_status_from_several_units
SP_MODULES_DEFENITIONS_PATH = "/etc/sp-modules"
SP_SUGGESTED_MODULES_PATH = "/etc/suggested-sp-modules"
logger = logging.getLogger(__name__)
def config_item_from_json(json_data: dict) -> Optional[ServiceConfigItem]:
"""Create a ServiceConfigItem from JSON data."""
weight = json_data.get("meta", {}).get("weight", 50)
if json_data["meta"]["type"] == "enable":
return None
if json_data["meta"]["type"] == "location":
return None
if json_data["meta"]["type"] == "string":
return StringServiceConfigItem(
id=json_data["name"],
default_value=json_data["default"],
description=json_data["description"],
regex=json_data["meta"].get("regex"),
widget=json_data["meta"].get("widget"),
allow_empty=json_data["meta"].get("allowEmpty", False),
weight=weight,
)
if json_data["meta"]["type"] == "bool":
return BoolServiceConfigItem(
id=json_data["name"],
default_value=json_data["default"],
description=json_data["description"],
widget=json_data["meta"].get("widget"),
weight=weight,
)
if json_data["meta"]["type"] == "enum":
return EnumServiceConfigItem(
id=json_data["name"],
default_value=json_data["default"],
description=json_data["description"],
options=json_data["meta"]["options"],
widget=json_data["meta"].get("widget"),
weight=weight,
)
if json_data["meta"]["type"] == "int":
return IntServiceConfigItem(
id=json_data["name"],
default_value=json_data["default"],
description=json_data["description"],
widget=json_data["meta"].get("widget"),
min_value=json_data["meta"].get("minValue"),
max_value=json_data["meta"].get("maxValue"),
weight=weight,
)
raise ValueError("Unknown config item type")
class TemplatedService(Service):
"""Class representing a dynamically loaded service."""
def __init__(self, service_id: str, source_data: Optional[str] = None) -> None:
if source_data:
self.definition_data = json.loads(source_data)
else:
# Check if the service exists
if not exists(join(SP_MODULES_DEFENITIONS_PATH, service_id)):
raise FileNotFoundError(f"Service {service_id} not found")
# Load the service
with open(join(SP_MODULES_DEFENITIONS_PATH, service_id)) as file:
self.definition_data = json.load(file)
# Check if required fields are present
if "meta" not in self.definition_data:
raise ValueError("meta not found in service definition")
if "options" not in self.definition_data:
raise ValueError("options not found in service definition")
# Load the meta data
self.meta = ServiceMetaData(**self.definition_data["meta"])
# Load the options
self.options = self.definition_data["options"]
# Load the config items
self.config_items = {}
for option in self.options.values():
config_item = config_item_from_json(option)
if config_item:
self.config_items[config_item.id] = config_item
# If it is movable, check for the location option
if self.meta.is_movable and "location" not in self.options:
raise ValueError("Service is movable but does not have a location option")
# Load all subdomains via options with "subdomain" widget
self.subdomain_options: List[str] = []
for option in self.options.values():
if option.get("meta", {}).get("widget") == "subdomain":
self.subdomain_options.append(option["name"])
def get_id(self) -> str:
# Check if ID contains elements that might be a part of the path
if "/" in self.meta.id or "\\" in self.meta.id:
raise ValueError("Invalid ID")
return self.meta.id
def get_display_name(self) -> str:
return self.meta.name
def get_description(self) -> str:
return self.meta.description
def get_svg_icon(self) -> str:
return base64.b64encode(self.meta.svg_icon.encode("utf-8")).decode("utf-8")
def get_subdomain(self) -> Optional[str]:
# If there are no subdomain options, return None
if not self.subdomain_options:
return None
# If primary_subdomain is set, try to find it in the options
if (
self.meta.primary_subdomain
and self.meta.primary_subdomain in self.subdomain_options
):
option_name = self.meta.primary_subdomain
# Otherwise, use the first subdomain option
else:
option_name = self.subdomain_options[0]
# Now, read the value from the userdata
name = self.get_id()
with ReadUserData() as user_data:
if "modules" in user_data:
if name in user_data["modules"]:
if option_name in user_data["modules"][name]:
return user_data["modules"][name][option_name]
# Otherwise, return default value for the option
return self.options[option_name].get("default")
def get_subdomains(self) -> List[str]:
# Return a current subdomain for every subdomain option
subdomains = []
with ReadUserData() as user_data:
for option in self.subdomain_options:
if "modules" in user_data:
if self.get_id() in user_data["modules"]:
if option in user_data["modules"][self.get_id()]:
subdomains.append(
user_data["modules"][self.get_id()][option]
)
continue
subdomains.append(self.options[option]["default"])
return subdomains
def get_url(self) -> Optional[str]:
if not self.meta.showUrl:
return None
subdomain = self.get_subdomain()
if not subdomain:
return None
return f"https://{subdomain}.{get_domain()}"
def get_user(self) -> Optional[str]:
if not self.meta.user:
return self.get_id()
return self.meta.user
def get_group(self) -> Optional[str]:
if not self.meta.group:
return self.get_user()
return self.meta.group
def is_movable(self) -> bool:
return self.meta.is_movable
def is_required(self) -> bool:
return self.meta.is_required
def can_be_backed_up(self) -> bool:
return self.meta.can_be_backed_up
def get_backup_description(self) -> str:
return self.meta.backup_description
def is_enabled(self) -> bool:
name = self.get_id()
with ReadUserData() as user_data:
return user_data.get("modules", {}).get(name, {}).get("enable", False)
def is_installed(self) -> bool:
name = self.get_id()
with FlakeServiceManager() as service_manager:
return name in service_manager.services
def get_license(self) -> List[License]:
return self.meta.license
def get_homepage(self) -> Optional[str]:
return self.meta.homepage
def get_source_page(self) -> Optional[str]:
return self.meta.source_page
def get_support_level(self) -> SupportLevel:
return self.meta.support_level
def get_status(self) -> ServiceStatus:
if not self.meta.systemd_services:
return ServiceStatus.INACTIVE
return get_service_status_from_several_units(self.meta.systemd_services)
def _set_enable(self, enable: bool):
name = self.get_id()
with WriteUserData() as user_data:
if "modules" not in user_data:
user_data["modules"] = {}
if name not in user_data["modules"]:
user_data["modules"][name] = {}
user_data["modules"][name]["enable"] = enable
def enable(self):
"""Enable the service. Usually this means enabling systemd unit."""
name = self.get_id()
if not self.is_installed():
# First, double-check that it is a suggested module
if exists(SP_SUGGESTED_MODULES_PATH):
with open(SP_SUGGESTED_MODULES_PATH) as file:
suggested_modules = json.load(file)
if name not in suggested_modules:
raise ValueError("Service is not a suggested module")
else:
raise FileNotFoundError("Suggested modules file not found")
with FlakeServiceManager() as service_manager:
service_manager.services[name] = (
f"git+https://git.selfprivacy.org/SelfPrivacy/selfprivacy-nixos-config.git?ref=flakes&dir=sp-modules/{name}"
)
if "location" in self.options:
with WriteUserData() as user_data:
if "modules" not in user_data:
user_data["modules"] = {}
if name not in user_data["modules"]:
user_data["modules"][name] = {}
if "location" not in user_data["modules"][name]:
user_data["modules"][name]["location"] = (
BlockDevices().get_root_block_device().name
)
self._set_enable(True)
def disable(self):
"""Disable the service. Usually this means disabling systemd unit."""
self._set_enable(False)
def start(self):
"""Start the systemd units"""
for unit in self.meta.systemd_services:
subprocess.run(["systemctl", "start", unit], check=False)
def stop(self):
"""Stop the systemd units"""
for unit in self.meta.systemd_services:
subprocess.run(["systemctl", "stop", unit], check=False)
def restart(self):
"""Restart the systemd units"""
for unit in self.meta.systemd_services:
subprocess.run(["systemctl", "restart", unit], check=False)
def get_configuration(self) -> dict:
# If there are no options, return an empty dict
if not self.config_items:
return {}
return {
key: self.config_items[key].as_dict(self.get_id())
for key in self.config_items
}
def set_configuration(self, config_items):
for key, value in config_items.items():
if key not in self.config_items:
raise ValueError(f"Key {key} is not valid for {self.get_id()}")
if self.config_items[key].validate_value(value) is False:
raise ValueError(f"Value {value} is not valid for {key}")
for key, value in config_items.items():
self.config_items[key].set_value(
value,
self.get_id(),
)
def get_storage_usage(self) -> int:
"""
Calculate the real storage usage of folders occupied by service
Calculate using pathlib.
Do not follow symlinks.
"""
storage_used = 0
for folder in self.get_folders():
storage_used += get_storage_usage(folder)
return storage_used
def has_folders(self) -> bool:
"""
If there are no folders on disk, moving is a no-op
"""
for folder in self.get_folders():
if exists(folder):
return True
return False
def get_dns_records(self, ip4: str, ip6: Optional[str]) -> List[ServiceDnsRecord]:
display_name = self.get_display_name()
subdomains = self.get_subdomains()
# Generate records for every subdomain
records: List[ServiceDnsRecord] = []
for subdomain in subdomains:
if not subdomain:
continue
records.append(
ServiceDnsRecord(
type="A",
name=subdomain,
content=ip4,
ttl=3600,
display_name=display_name,
)
)
if ip6:
records.append(
ServiceDnsRecord(
type="AAAA",
name=subdomain,
content=ip6,
ttl=3600,
display_name=display_name,
)
)
return records
def get_drive(self) -> str:
"""
Get the name of the drive/volume where the service is located.
Example values are `sda1`, `vda`, `sdb`.
"""
root_device: str = BlockDevices().get_root_block_device().name
if not self.is_movable():
return root_device
with ReadUserData() as userdata:
if userdata.get("useBinds", False):
return (
userdata.get("modules", {})
.get(self.get_id(), {})
.get(
"location",
root_device,
)
)
else:
return root_device
def _get_db_dumps_folder(self) -> str:
# The dumps live in a fixed location, in a subfolder named after the service
return join("/var/lib/postgresql-dumps", self.get_id())
def get_folders(self) -> List[str]:
folders = self.meta.folders
owned_folders = self.meta.owned_folders
# Include the contents of folders list
resulting_folders = folders.copy()
for folder in owned_folders:
resulting_folders.append(folder.path)
return resulting_folders
def get_owned_folders(self) -> List[OwnedPath]:
folders = self.meta.folders
owned_folders = self.meta.owned_folders
resulting_folders = owned_folders.copy()
for folder in folders:
resulting_folders.append(self.owned_path(folder))
return resulting_folders
def get_folders_to_back_up(self) -> List[str]:
resulting_folders = self.meta.folders.copy()
if self.get_postgresql_databases():
resulting_folders.append(self._get_db_dumps_folder())
return resulting_folders
def set_location(self, volume: BlockDevice):
"""
Only changes userdata
"""
service_id = self.get_id()
with WriteUserData() as user_data:
if "modules" not in user_data:
user_data["modules"] = {}
if service_id not in user_data["modules"]:
user_data["modules"][service_id] = {}
user_data["modules"][service_id]["location"] = volume.name
def get_postgresql_databases(self) -> List[str]:
return self.meta.postgre_databases
def owned_path(self, path: str):
"""Default folder ownership"""
service_name = self.get_display_name()
try:
owner = self.get_user()
if owner is None:
# TODO: assume root?
# (if we do not want to make assumptions, maybe the user should not be declared optional?)
raise LookupError(f"no user for service: {service_name}")
group = self.get_group()
if group is None:
raise LookupError(f"no group for service: {service_name}")
except Exception as error:
raise LookupError(
f"when deciding a bind for folder {path} of service {service_name}, error: {str(error)}"
)
return OwnedPath(
path=path,
owner=owner,
group=group,
)
def pre_backup(self, job: Job):
if self.get_postgresql_databases():
db_dumps_folder = self._get_db_dumps_folder()
# Create folder for the dumps if it does not exist
if not exists(db_dumps_folder):
mkdir(db_dumps_folder)
# Dump the databases
for db_name in self.get_postgresql_databases():
Jobs.update(
job,
status_text=f"Creating a dump of database {db_name}",
status=JobStatus.RUNNING,
)
db_dumper = PostgresDumper(db_name)
backup_file = join(db_dumps_folder, f"{db_name}.dump")
db_dumper.backup_database(backup_file)
def _clear_db_dumps(self):
db_dumps_folder = self._get_db_dumps_folder()
for db_name in self.get_postgresql_databases():
backup_file = join(db_dumps_folder, f"{db_name}.dump")
if exists(backup_file):
remove(backup_file)
unpacked_file = backup_file.replace(".gz", "")
if exists(unpacked_file):
remove(unpacked_file)
def post_backup(self, job: Job):
if self.get_postgresql_databases():
db_dumps_folder = self._get_db_dumps_folder()
# Remove the backup files
for db_name in self.get_postgresql_databases():
backup_file = join(db_dumps_folder, f"{db_name}.dump")
if exists(backup_file):
remove(backup_file)
def pre_restore(self, job: Job):
if self.get_postgresql_databases():
# Create folder for the dumps if it does not exist
db_dumps_folder = self._get_db_dumps_folder()
if not exists(db_dumps_folder):
mkdir(db_dumps_folder)
# Remove existing dumps if they exist
self._clear_db_dumps()
def post_restore(self, job: Job):
if self.get_postgresql_databases():
# Recover the databases
db_dumps_folder = self._get_db_dumps_folder()
for db_name in self.get_postgresql_databases():
if exists(join(db_dumps_folder, f"{db_name}.dump")):
Jobs.update(
job,
status_text=f"Restoring database {db_name}",
status=JobStatus.RUNNING,
)
db_dumper = PostgresDumper(db_name)
backup_file = join(db_dumps_folder, f"{db_name}.dump")
db_dumper.restore_database(backup_file)
else:
logger.error(f"Database dump for {db_name} not found")
raise FileNotFoundError(f"Database dump for {db_name} not found")
# Remove the dumps
self._clear_db_dumps()
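Taken together, these hooks give every backup a fixed shape. A minimal sketch of how a runner might drive them; `run_backup` and the `backupper.snapshot` call are illustrative stand-ins, not the actual API:

def run_backup(service: Service, job: Job, backupper) -> None:
    # pre_backup dumps PostgreSQL databases into the dumps folder, if any
    service.pre_backup(job)
    try:
        # hypothetical backend call that snapshots the listed folders
        backupper.snapshot(service.get_folders_to_back_up())
    finally:
        # post_backup removes the temporary *.dump files again
        service.post_backup(job)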


@@ -1,10 +1,11 @@
"""Class representing Bitwarden service"""
import base64
import typing
import subprocess
from typing import List
from os import path
from pathlib import Path
# from enum import Enum
@@ -24,6 +25,7 @@ class DummyService(Service):
startstop_delay = 0.0
backuppable = True
movable = True
fail_to_stop = False
# if False, we try to actually move
simulate_moving = True
drive = "sda1"
@@ -32,6 +34,12 @@ class DummyService(Service):
cls.folders = folders
def __init__(self):
# Maybe init it with some dummy files right here.
# Currently this is done in a fixture, but if we did it here
# we could add convenience methods for writing and reading
# test files, making it easy to check integrity in the many restore tests.
super().__init__()
with open(self.status_file(), "w") as file:
file.write(ServiceStatus.ACTIVE.value)
@@ -57,16 +65,6 @@ class DummyService(Service):
# return ""
return base64.b64encode(BITWARDEN_ICON.encode("utf-8")).decode("utf-8")
@staticmethod
def get_url() -> typing.Optional[str]:
"""Return service url."""
domain = "test.com"
return f"https://password.{domain}"
@staticmethod
def get_subdomain() -> typing.Optional[str]:
return "password"
@classmethod
def is_movable(cls) -> bool:
return cls.movable
@@ -82,8 +80,11 @@ class DummyService(Service):
@classmethod
def status_file(cls) -> str:
dir = cls.folders[0]
# we do not REALLY want to store our state in our declared folders
return path.join(dir, "..", "service_status")
# We do not want to store our state in our declared folders,
# because tests move and discard them freely
parent = Path(dir).parent
return path.join(parent, "service_status")
@classmethod
def set_status(cls, status: ServiceStatus):
@@ -92,8 +93,18 @@ class DummyService(Service):
@classmethod
def get_status(cls) -> ServiceStatus:
filepath = cls.status_file()
if filepath in [None, ""]:
raise ValueError("We do not have a path for our test dummy status file!")
if not path.exists(filepath):
raise FileNotFoundError(filepath)
with open(cls.status_file(), "r") as file:
status_string = file.read().strip()
if status_string in [None, ""]:
raise NotImplementedError(
f"It appears our test service no longer has any status in the statusfile. Filename = {cls.status_file}, status string inside is '{status_string}' (quoted) "
)
return ServiceStatus[status_string]
@classmethod
@@ -101,6 +112,10 @@ class DummyService(Service):
cls, new_status: ServiceStatus, delay_sec: float
):
"""simulating a delay on systemd side"""
if not isinstance(new_status, ServiceStatus):
raise ValueError(
f"received an invalid new status for test service. new status: {str(new_status)}"
)
if delay_sec == 0:
cls.set_status(new_status)
return
@@ -144,14 +159,23 @@ class DummyService(Service):
when moved"""
cls.simulate_moving = enabled
@classmethod
def simulate_fail_to_stop(cls, value: bool):
cls.fail_to_stop = value
@classmethod
def stop(cls):
# simulate a failing service unable to stop
if cls.get_status() != ServiceStatus.FAILED:
cls.set_status(ServiceStatus.DEACTIVATING)
cls.change_status_with_async_delay(
ServiceStatus.INACTIVE, cls.startstop_delay
)
if cls.fail_to_stop:
cls.change_status_with_async_delay(
ServiceStatus.FAILED, cls.startstop_delay
)
else:
cls.change_status_with_async_delay(
ServiceStatus.INACTIVE, cls.startstop_delay
)
@classmethod
def start(cls):
@@ -163,18 +187,14 @@ class DummyService(Service):
cls.set_status(ServiceStatus.RELOADING)  # is this the correct one?
cls.change_status_with_async_delay(ServiceStatus.ACTIVE, cls.startstop_delay)
@staticmethod
def get_configuration():
@classmethod
def get_configuration(cls):
return {}
@staticmethod
def set_configuration(config_items):
@classmethod
def set_configuration(cls, config_items):
return super().set_configuration(config_items)
@staticmethod
def get_logs():
return ""
@staticmethod
def get_storage_usage() -> int:
storage_usage = 0


@@ -7,11 +7,23 @@ import os
import subprocess
import portalocker
import typing
import glob
from traceback import format_tb as format_traceback
from selfprivacy_api.utils.default_subdomains import (
DEFAULT_SUBDOMAINS,
RESERVED_SUBDOMAINS,
)
USERDATA_FILE = "/etc/nixos/userdata.json"
SECRETS_FILE = "/etc/selfprivacy/secrets.json"
DKIM_DIR = "/var/dkim/"
DKIM_DIR = "/var/dkim"
ACCOUNT_PATH_PATTERN = (
"/var/lib/acme/.lego/accounts/*/acme-v02.api.letsencrypt.org/*/account.json"
)
class UserDataFiles(Enum):
@@ -133,6 +145,22 @@ def is_username_forbidden(username):
return False
def check_if_subdomain_is_taken(subdomain: str) -> bool:
"""Check if subdomain is already taken or reserved"""
if subdomain in RESERVED_SUBDOMAINS:
return True
with ReadUserData() as data:
for module in data["modules"]:
if (
data["modules"][module].get(
"subdomain", DEFAULT_SUBDOMAINS.get(module, "")
)
== subdomain
):
return True
return False
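A quick illustration of the fallback logic, assuming a userdata file where only gitea sets an explicit subdomain (values hypothetical):

# userdata: {"modules": {"gitea": {"subdomain": "git"}, "nextcloud": {}}}
check_if_subdomain_is_taken("git")    # True: explicit subdomain of gitea
check_if_subdomain_is_taken("cloud")  # True: nextcloud's DEFAULT_SUBDOMAINS fallback
check_if_subdomain_is_taken("admin")  # True: listed in RESERVED_SUBDOMAINS
check_if_subdomain_is_taken("files")  # False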
def parse_date(date_str: str) -> datetime.datetime:
"""Parse date string which can be in one of these formats:
- %Y-%m-%dT%H:%M:%S.%fZ
@@ -199,3 +227,28 @@ def hash_password(password):
hashed_password = hashed_password.decode("ascii")
hashed_password = hashed_password.rstrip()
return hashed_password
def write_to_log(message):
with open("/etc/selfprivacy/log", "a") as log:
log.write(f"{datetime.datetime.now()} {message}\n")
log.flush()
os.fsync(log.fileno())
def pretty_error(e: Exception) -> str:
traceback = "/r".join(format_traceback(e.__traceback__))
return type(e).__name__ + ": " + str(e) + ": " + traceback
def read_account_uri() -> str:
account_file = glob.glob(ACCOUNT_PATH_PATTERN)
if not account_file:
raise FileNotFoundError(
f"No account files found matching: {ACCOUNT_PATH_PATTERN}"
)
with open(account_file[0], "r") as file:
account_info = json.load(file)
return account_info["registration"]["uri"]


@@ -1,4 +1,5 @@
"""A block device API wrapping lsblk"""
from __future__ import annotations
import subprocess
import json
@@ -53,6 +54,7 @@ class BlockDevice:
def update_from_dict(self, device_dict: dict):
self.name = device_dict["name"]
self.path = device_dict["path"]
# TODO: maybe parse it as numbers, as in origin?
self.fsavail = str(device_dict["fsavail"])
self.fssize = str(device_dict["fssize"])
self.fstype = device_dict["fstype"]
@@ -90,6 +92,14 @@ class BlockDevice:
def __hash__(self):
return hash(self.name)
def get_display_name(self) -> str:
if self.is_root():
return "System disk"
elif self.model == "Volume":
return "Expandable volume"
else:
return self.name
def is_root(self) -> bool:
"""
Return True if the block device is the root device.


@@ -0,0 +1,6 @@
import time
def get_ttl_hash(seconds=3600):
"""Return the same value withing `seconds` time period"""
return round(time.time() / seconds)
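The returned bucket number is meant to be passed as an extra argument to an `lru_cache`-decorated function, so cached results expire roughly every `seconds`; a minimal sketch (the cached function is hypothetical):

from functools import lru_cache

@lru_cache
def expensive_lookup(key: str, ttl_hash: int = 0):
    # recomputed whenever the ttl_hash bucket rolls over
    ...

expensive_lookup("disk-usage", ttl_hash=get_ttl_hash(seconds=3600))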


@@ -0,0 +1,22 @@
DEFAULT_SUBDOMAINS = {
"bitwarden": "password",
"gitea": "git",
"jitsi-meet": "meet",
"simple-nixos-mailserver": "",
"nextcloud": "cloud",
"ocserv": "vpn",
"pleroma": "social",
"roundcube": "roundcube",
"testservice": "test",
"monitoring": "",
}
RESERVED_SUBDOMAINS = [
"admin",
"administrator",
"api",
"auth",
"user",
"users",
"ntfy",
]


@@ -0,0 +1,12 @@
from selfprivacy_api.graphql.mutations.mutation_interface import (
GenericJobMutationReturn,
)
def api_job_mutation_error(error: Exception, code: int = 400):
return GenericJobMutationReturn(
success=False,
code=code,
message=str(error),
job=None,
)
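Intended use, sketched with a hypothetical resolver: catch the failure and convert it into a standard error payload instead of letting the exception escape.

def start_some_job() -> GenericJobMutationReturn:  # hypothetical mutation
    try:
        raise ValueError("backup is already running")  # stand-in for real work
    except ValueError as error:
        return api_job_mutation_error(error)            # code defaults to 400
    except PermissionError as error:
        return api_job_mutation_error(error, code=403)  # override when appropriate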


@@ -1,4 +1,5 @@
"""MiniHuey singleton."""
from os import environ
from huey import RedisHuey


@@ -0,0 +1,461 @@
"""Prometheus monitoring queries."""
# pylint: disable=too-few-public-methods
import requests
import strawberry
from dataclasses import dataclass
from typing import Optional, Annotated, Union, List, Tuple
from datetime import datetime, timedelta
PROMETHEUS_URL = "http://localhost:9001"
@strawberry.type
@dataclass
class MonitoringValue:
timestamp: datetime
value: str
@strawberry.type
@dataclass
class MonitoringMetric:
metric_id: str
values: List[MonitoringValue]
@strawberry.type
class MonitoringQueryError:
error: str
@strawberry.type
class MonitoringValues:
values: List[MonitoringValue]
@strawberry.type
class MonitoringMetrics:
metrics: List[MonitoringMetric]
MonitoringValuesResult = Annotated[
Union[MonitoringValues, MonitoringQueryError],
strawberry.union("MonitoringValuesResult"),
]
MonitoringMetricsResult = Annotated[
Union[MonitoringMetrics, MonitoringQueryError],
strawberry.union("MonitoringMetricsResult"),
]
class MonitoringQueries:
@staticmethod
def _send_range_query(
query: str, start: int, end: int, step: int, result_type: Optional[str] = None
) -> Union[dict, MonitoringQueryError]:
try:
response = requests.get(
f"{PROMETHEUS_URL}/api/v1/query_range",
params={
"query": query,
"start": start,
"end": end,
"step": step,
},
timeout=0.8,
)
if response.status_code != 200:
return MonitoringQueryError(
error=f"Prometheus returned unexpected HTTP status code. Error: {response.text}. The query was {query}"
)
json = response.json()
if result_type and json["data"]["resultType"] != result_type:
return MonitoringQueryError(
error="Unexpected resultType returned from Prometheus, request failed"
)
return json["data"]
except Exception as error:
return MonitoringQueryError(
error=f"Prometheus request failed! Error: {str(error)}"
)
@staticmethod
def _send_query(
query: str, result_type: Optional[str] = None
) -> Union[dict, MonitoringQueryError]:
try:
response = requests.get(
f"{PROMETHEUS_URL}/api/v1/query",
params={
"query": query,
},
timeout=0.8,
)
if response.status_code != 200:
return MonitoringQueryError(
error=f"Prometheus returned unexpected HTTP status code. Error: {response.text}. The query was {query}"
)
json = response.json()
if result_type and json["data"]["resultType"] != result_type:
return MonitoringQueryError(
error="Unexpected resultType returned from Prometheus, request failed"
)
return json["data"]
except Exception as error:
return MonitoringQueryError(
error=f"Prometheus request failed! Error: {str(error)}"
)
@staticmethod
def _get_time_range(
start: Optional[datetime] = None,
end: Optional[datetime] = None,
) -> Tuple[datetime, datetime]:
"""Get the start and end time for queries."""
if start is None:
start = datetime.now() - timedelta(minutes=20)
if end is None:
end = datetime.now()
return start, end
@staticmethod
def _prometheus_value_to_monitoring_value(x: Tuple[int, str]):
return MonitoringValue(timestamp=datetime.fromtimestamp(x[0]), value=x[1])
@staticmethod
def _clean_slice_id(slice_id: str, clean_id: bool) -> str:
"""Slices come in form of `/slice_name.slice`, we need to remove the `.slice` and `/` part."""
if clean_id:
parts = slice_id.split(".")[0].split("/")
if len(parts) > 1:
return parts[1]
else:
raise ValueError(f"Incorrect format slice_id: {slice_id}")
return slice_id
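Worked examples of the cleaning (values illustrative):

MonitoringQueries._clean_slice_id("/gitea.slice", clean_id=True)   # -> "gitea"
MonitoringQueries._clean_slice_id("/gitea.slice", clean_id=False)  # -> "/gitea.slice", unchanged
MonitoringQueries._clean_slice_id("gitea", clean_id=True)          # ValueError: no leading "/"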
@staticmethod
def _prometheus_response_to_monitoring_metrics(
response: dict, id_key: str, clean_id: bool = False
) -> List[MonitoringMetric]:
if response["resultType"] == "vector":
return list(
map(
lambda x: MonitoringMetric(
metric_id=MonitoringQueries._clean_slice_id(
x["metric"].get(id_key, "/unknown.slice"),
clean_id=clean_id,
),
values=[
MonitoringQueries._prometheus_value_to_monitoring_value(
x["value"]
)
],
),
response["result"],
)
)
else:
return list(
map(
lambda x: MonitoringMetric(
metric_id=MonitoringQueries._clean_slice_id(
x["metric"].get(id_key, "/unknown.slice"), clean_id=clean_id
),
values=list(
map(
MonitoringQueries._prometheus_value_to_monitoring_value,
x["values"],
)
),
),
response["result"],
)
)
@staticmethod
def _calculate_offset_and_duration(
start: datetime, end: datetime
) -> Tuple[int, int]:
"""Calculate the offset and duration for Prometheus queries.
They must be in seconds.
"""
offset = int((datetime.now() - end).total_seconds())
duration = int((end - start).total_seconds())
return offset, duration
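For example, a one-hour window ending five minutes ago yields offset=300 and duration=3600, which the queries below splice into PromQL as `[3600s:] offset 300s`:

now = datetime.now()
offset, duration = MonitoringQueries._calculate_offset_and_duration(
    start=now - timedelta(minutes=65),
    end=now - timedelta(minutes=5),
)
# offset == 300, duration == 3600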
@staticmethod
def cpu_usage_overall(
start: Optional[datetime] = None,
end: Optional[datetime] = None,
step: int = 60, # seconds
) -> MonitoringValuesResult:
"""
Get CPU information.
Args:
start (datetime, optional): The start time.
Defaults to 20 minutes ago if not provided.
end (datetime, optional): The end time.
Defaults to current time if not provided.
step (int): Interval in seconds for querying CPU usage data.
"""
start, end = MonitoringQueries._get_time_range(start, end)
start_timestamp = int(start.timestamp())
end_timestamp = int(end.timestamp())
query = '100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)'
data = MonitoringQueries._send_range_query(
query, start_timestamp, end_timestamp, step, result_type="matrix"
)
if isinstance(data, MonitoringQueryError):
return data
return MonitoringValues(
values=list(
map(
MonitoringQueries._prometheus_value_to_monitoring_value,
data["result"][0]["values"],
)
)
)
@staticmethod
def memory_usage_overall(
start: Optional[datetime] = None,
end: Optional[datetime] = None,
step: int = 60, # seconds
) -> MonitoringValuesResult:
"""
Get memory usage.
Args:
start (datetime, optional): The start time.
Defaults to 20 minutes ago if not provided.
end (datetime, optional): The end time.
Defaults to current time if not provided.
step (int): Interval in seconds for querying memory usage data.
"""
start, end = MonitoringQueries._get_time_range(start, end)
start_timestamp = int(start.timestamp())
end_timestamp = int(end.timestamp())
query = "100 - (100 * (node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes))"
data = MonitoringQueries._send_range_query(
query, start_timestamp, end_timestamp, step, result_type="matrix"
)
if isinstance(data, MonitoringQueryError):
return data
return MonitoringValues(
values=list(
map(
MonitoringQueries._prometheus_value_to_monitoring_value,
data["result"][0]["values"],
)
)
)
@staticmethod
def swap_usage_overall(
start: Optional[datetime] = None,
end: Optional[datetime] = None,
step: int = 60, # seconds
) -> MonitoringValuesResult:
"""
Get swap memory usage.
Args:
start (datetime, optional): The start time.
Defaults to 20 minutes ago if not provided.
end (datetime, optional): The end time.
Defaults to current time if not provided.
step (int): Interval in seconds for querying swap memory usage data.
"""
start, end = MonitoringQueries._get_time_range(start, end)
start_timestamp = int(start.timestamp())
end_timestamp = int(end.timestamp())
query = (
"100 - (100 * (node_memory_SwapFree_bytes / node_memory_SwapTotal_bytes))"
)
data = MonitoringQueries._send_range_query(
query, start_timestamp, end_timestamp, step, result_type="matrix"
)
if isinstance(data, MonitoringQueryError):
return data
return MonitoringValues(
values=list(
map(
MonitoringQueries._prometheus_value_to_monitoring_value,
data["result"][0]["values"],
)
)
)
@staticmethod
def memory_usage_max_by_slice(
start: Optional[datetime] = None,
end: Optional[datetime] = None,
) -> MonitoringMetricsResult:
"""
Get maximum memory usage for each service (i.e. systemd slice).
Args:
start (datetime, optional): The start time.
Defaults to 20 minutes ago if not provided.
end (datetime, optional): The end time.
Defaults to current time if not provided.
"""
start, end = MonitoringQueries._get_time_range(start, end)
offset, duration = MonitoringQueries._calculate_offset_and_duration(start, end)
if offset == 0:
query = f'max_over_time((container_memory_rss{{id!~".*slice.*slice", id=~".*slice"}}+container_memory_swap{{id!~".*slice.*slice", id=~".*slice"}})[{duration}s:])'
else:
query = f'max_over_time((container_memory_rss{{id!~".*slice.*slice", id=~".*slice"}}+container_memory_swap{{id!~".*slice.*slice", id=~".*slice"}})[{duration}s:] offset {offset}s)'
data = MonitoringQueries._send_query(query, result_type="vector")
if isinstance(data, MonitoringQueryError):
return data
return MonitoringMetrics(
metrics=MonitoringQueries._prometheus_response_to_monitoring_metrics(
data, "id", clean_id=True
)
)
@staticmethod
def memory_usage_average_by_slice(
start: Optional[datetime] = None,
end: Optional[datetime] = None,
) -> MonitoringMetricsResult:
"""
Get average memory usage for each service (i.e. systemd slice).
Args:
start (datetime, optional): The start time.
Defaults to 20 minutes ago if not provided.
end (datetime, optional): The end time.
Defaults to current time if not provided.
"""
start, end = MonitoringQueries._get_time_range(start, end)
offset, duration = MonitoringQueries._calculate_offset_and_duration(start, end)
if offset == 0:
query = f'avg_over_time((container_memory_rss{{id!~".*slice.*slice", id=~".*slice"}}+container_memory_swap{{id!~".*slice.*slice", id=~".*slice"}})[{duration}s:])'
else:
query = f'avg_over_time((container_memory_rss{{id!~".*slice.*slice", id=~".*slice"}}+container_memory_swap{{id!~".*slice.*slice", id=~".*slice"}})[{duration}s:] offset {offset}s)'
data = MonitoringQueries._send_query(query, result_type="vector")
if isinstance(data, MonitoringQueryError):
return data
return MonitoringMetrics(
metrics=MonitoringQueries._prometheus_response_to_monitoring_metrics(
data, "id", clean_id=True
)
)
@staticmethod
def disk_usage_overall(
start: Optional[datetime] = None,
end: Optional[datetime] = None,
step: int = 60, # seconds
) -> MonitoringMetricsResult:
"""
Get disk usage information.
Args:
start (datetime, optional): The start time.
Defaults to 20 minutes ago if not provided.
end (datetime, optional): The end time.
Defaults to current time if not provided.
step (int): Interval in seconds for querying disk usage data.
"""
start, end = MonitoringQueries._get_time_range(start, end)
start_timestamp = int(start.timestamp())
end_timestamp = int(end.timestamp())
query = """100 - (100 * sum by (device) (node_filesystem_avail_bytes{fstype!="rootfs",fstype!="ramfs",fstype!="tmpfs",mountpoint!="/efi"}) / sum by (device) (node_filesystem_size_bytes{fstype!="rootfs",fstype!="ramfs",fstype!="tmpfs",mountpoint!="/efi"}))"""
data = MonitoringQueries._send_range_query(
query, start_timestamp, end_timestamp, step, result_type="matrix"
)
if isinstance(data, MonitoringQueryError):
return data
return MonitoringMetrics(
metrics=MonitoringQueries._prometheus_response_to_monitoring_metrics(
data, "device"
)
)
@staticmethod
def network_usage_overall(
start: Optional[datetime] = None,
end: Optional[datetime] = None,
step: int = 60, # seconds
) -> MonitoringMetricsResult:
"""
Get network usage information for both download and upload.
Args:
start (datetime, optional): The start time.
Defaults to 20 minutes ago if not provided.
end (datetime, optional): The end time.
Defaults to current time if not provided.
step (int): Interval in seconds for querying network data.
"""
start, end = MonitoringQueries._get_time_range(start, end)
start_timestamp = int(start.timestamp())
end_timestamp = int(end.timestamp())
query = """
label_replace(rate(node_network_receive_bytes_total{device!="lo"}[5m]), "direction", "receive", "device", ".*")
or
label_replace(rate(node_network_transmit_bytes_total{device!="lo"}[5m]), "direction", "transmit", "device", ".*")
"""
data = MonitoringQueries._send_range_query(
query, start_timestamp, end_timestamp, step, result_type="matrix"
)
if isinstance(data, MonitoringQueryError):
return data
return MonitoringMetrics(
metrics=MonitoringQueries._prometheus_response_to_monitoring_metrics(
data, "direction"
)
)
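Typical usage from a resolver, assuming Prometheus is reachable at PROMETHEUS_URL (the time window is illustrative):

result = MonitoringQueries.cpu_usage_overall(
    start=datetime.now() - timedelta(hours=1),
    step=120,
)
if isinstance(result, MonitoringQueryError):
    print(result.error)
else:
    for point in result.values:
        print(point.timestamp, point.value)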


@@ -0,0 +1,34 @@
import subprocess
class PostgresDumper:
"""--dbname=postgresql://postgres@%2Frun%2Fpostgresql/pleroma"""
def __init__(self, db_name: str):
self.db_name = db_name
self.user = "postgres"
self.socket_dir = r"%2Frun%2Fpostgresql"
def backup_database(self, backup_file: str):
# Create the database dump in custom format
dump_command = [
"pg_dump",
f"--dbname=postgresql://{self.user}@{self.socket_dir}/{self.db_name}",
"--format=custom",
f"--file={backup_file}",
]
subprocess.run(dump_command, check=True)
return backup_file
def restore_database(self, backup_file: str):
restore_command = [
"pg_restore",
f"--dbname=postgresql://{self.user}@{self.socket_dir}",
"--clean",
"--create",
"--exit-on-error",
backup_file,
]
subprocess.run(restore_command, check=True)
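A usage sketch mirroring the pre_backup/post_restore hooks above; the paths assume a service whose id and database name are both `pleroma`:

dumper = PostgresDumper("pleroma")
dumper.backup_database("/var/lib/postgresql-dumps/pleroma/pleroma.dump")
# ...and on restore:
dumper.restore_database("/var/lib/postgresql-dumps/pleroma/pleroma.dump")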


@@ -1,15 +1,23 @@
import uuid
from datetime import datetime
from typing import Optional
from enum import Enum
def store_model_as_hash(redis, redis_key, model):
for key, value in model.dict().items():
model_dict = model.dict()
for key, value in model_dict.items():
if isinstance(value, uuid.UUID):
value = str(value)
if isinstance(value, datetime):
value = value.isoformat()
if isinstance(value, Enum):
value = value.value
redis.hset(redis_key, key, str(value))
value = str(value)
model_dict[key] = value
redis.hset(redis_key, mapping=model_dict)
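A round-trip sketch, assuming a pydantic model (the model, the key, and the way the connection is obtained are all hypothetical):

import uuid
from datetime import datetime
from pydantic import BaseModel

class ExampleJob(BaseModel):  # hypothetical model
    uid: uuid.UUID
    created_at: datetime

redis = RedisPool().get_connection()  # illustrative way to obtain a connection
store_model_as_hash(redis, "jobs:example", ExampleJob(uid=uuid.uuid4(), created_at=datetime.now()))
restored = hash_as_model(redis, "jobs:example", ExampleJob)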
def hash_as_model(redis, redis_key: str, model_class):


@@ -1,24 +1,35 @@
"""
Redis pool module for selfprivacy_api
"""
import redis
from selfprivacy_api.utils.singleton_metaclass import SingletonMetaclass
import redis
import redis.asyncio as redis_async
from redis.asyncio.client import PubSub
REDIS_SOCKET = "/run/redis-sp-api/redis.sock"
class RedisPool(metaclass=SingletonMetaclass):
class RedisPool:
"""
Redis connection pool.
"""
def __init__(self):
self._dbnumber = 0
url = RedisPool.connection_url(dbnumber=self._dbnumber)
# We need a normal sync pool because otherwise
# our whole API will need to be async
self._pool = redis.ConnectionPool.from_url(
RedisPool.connection_url(dbnumber=0),
url,
decode_responses=True,
)
self._pubsub_connection = self.get_connection()
# We need an async pool for pubsub
self._async_pool = redis_async.ConnectionPool.from_url(
url,
decode_responses=True,
)
self._raw_pool = redis.ConnectionPool.from_url(url)
@staticmethod
def connection_url(dbnumber: int) -> str:
@@ -34,8 +45,21 @@ class RedisPool(metaclass=SingletonMetaclass):
"""
return redis.Redis(connection_pool=self._pool)
def get_pubsub(self):
def get_raw_connection(self):
"""
Get a pubsub connection from the pool.
Get a raw connection from the pool.
"""
return self._pubsub_connection.pubsub()
return redis.Redis(connection_pool=self._raw_pool)
def get_connection_async(self) -> redis_async.Redis:
"""
Get an async connection from the pool.
Async connections allow pubsub.
"""
return redis_async.Redis(connection_pool=self._async_pool)
async def subscribe_to_keys(self, pattern: str) -> PubSub:
async_redis = self.get_connection_async()
pubsub = async_redis.pubsub()
await pubsub.psubscribe(f"__keyspace@{self._dbnumber}__:" + pattern)
return pubsub
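An async consumer sketch; keyspace events only fire if Redis runs with notify-keyspace-events enabled, which is assumed here:

import asyncio

async def watch_jobs():
    pubsub = await RedisPool().subscribe_to_keys("jobs:*")
    async for message in pubsub.listen():
        if message["type"] == "pmessage":
            # data carries the triggering command name, e.g. "hset"
            print(message["channel"], message["data"])

asyncio.run(watch_jobs())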


@@ -0,0 +1 @@
SUBDOMAIN_REGEX = r"^[A-Za-z0-9][A-Za-z0-9\-]{0,61}[A-Za-z0-9]$"
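The pattern accepts two to 63 characters of letters, digits, and inner hyphens, matching DNS label limits; a validation sketch (`is_valid_subdomain` is a hypothetical helper):

import re

def is_valid_subdomain(candidate: str) -> bool:
    # fullmatch is redundant with the anchors but makes the intent explicit
    return re.fullmatch(SUBDOMAIN_REGEX, candidate) is not None

is_valid_subdomain("git")      # True
is_valid_subdomain("-bad")     # False: must not start with a hyphen
is_valid_subdomain("a" * 64)   # False: DNS labels cap at 63 characters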


@@ -3,6 +3,7 @@ Singleton is a creational design pattern, which ensures that only
one object of its kind exists and provides a single point of access
to it for any other code.
"""
from threading import Lock

Some files were not shown because too many files have changed in this diff.