chore: account init to update node admin key #1373


Merged · 10 commits · Feb 15, 2025
71 changes: 63 additions & 8 deletions src/commands/account.ts
@@ -7,16 +7,20 @@
import {Flags as flags} from './flags.js';
import {Listr} from 'listr2';
import * as constants from '../core/constants.js';
import * as helpers from '../core/helpers.js';
import {FREEZE_ADMIN_ACCOUNT} from '../core/constants.js';
import {type AccountManager} from '../core/account_manager.js';
import {type AccountId, AccountInfo, HbarUnit, PrivateKey} from '@hashgraph/sdk';
import {type AccountId, AccountInfo, HbarUnit, NodeUpdateTransaction, PrivateKey} from '@hashgraph/sdk';
import {ListrLease} from '../core/lease/listr_lease.js';
import {type CommandBuilder} from '../types/aliases.js';
import {sleep} from '../core/helpers.js';
import {type CommandBuilder, type NodeAliases} from '../types/aliases.js';
import {resolveNamespaceFromDeployment} from '../core/resolvers.js';
import {Duration} from '../core/time/duration.js';
import {type NamespaceName} from '../core/kube/resources/namespace/namespace_name.js';
import {type DeploymentName} from '../core/config/remote/types.js';
import {Templates} from '../core/templates.js';
import {sleep} from '../core/helpers.js';
import {SecretType} from '../core/kube/resources/secret/secret_type.js';
import {Base64} from 'js-base64';

export class AccountCommand extends BaseCommand {
private readonly accountManager: AccountManager;
@@ -33,7 +37,7 @@
super(opts);

if (!opts || !opts.accountManager)
throw new IllegalArgumentError('An instance of core/AccountManager is required', opts.accountManager as any);

this.accountManager = opts.accountManager;
this.accountInfo = null;
@@ -68,7 +72,7 @@
try {
const privateKey = PrivateKey.fromStringDer(newAccountInfo.privateKey);
newAccountInfo.privateKeyRaw = privateKey.toStringRaw();
} catch (e: Error | any) {
this.logger.error(`failed to retrieve EVM address for accountId ${newAccountInfo.accountId}`);
}
}
@@ -109,7 +113,7 @@
return this.accountManager.accountInfoQuery(ctx.config.accountId);
}

async updateAccountInfo(ctx: any) {
let amount = ctx.config.amount;
if (ctx.config.ed25519PrivateKey) {
if (
@@ -145,12 +149,13 @@
return await this.accountManager.transferAmount(constants.TREASURY_ACCOUNT_ID, toAccountId, amount);
}

async init(argv: any) {
const self = this;

interface Context {
config: {
namespace: NamespaceName;
nodeAliases: NodeAliases;
};
updateSecrets: boolean;
accountsBatchedSet: number[][];
@@ -168,16 +173,17 @@
task: async (ctx, task) => {
self.configManager.update(argv);
const namespace = await resolveNamespaceFromDeployment(this.localConfig, this.configManager, task);
const config = {namespace};

if (!(await this.k8Factory.default().namespaces().has(namespace))) {
throw new SoloError(`namespace ${namespace.name} does not exist`);
}

// set config in the context for later tasks to use
ctx.config = config;
ctx.config = {
namespace: namespace,
nodeAliases: helpers.parseNodeAliases(this.configManager.getFlag(flags.nodeAliasesUnparsed)),
};

self.logger.debug('Initialized config', {config});
self.logger.debug('Initialized config', ctx.config);

await self.accountManager.loadNodeClient(
ctx.config.namespace,
@@ -216,7 +222,7 @@
{
title: 'Update special account key sets',
task: ctx => {
const subTasks: any[] = [];
const realm = constants.HEDERA_NODE_ACCOUNT_ID_START.realm;
const shard = constants.HEDERA_NODE_ACCOUNT_ID_START.shard;
for (const currentSet of ctx.accountsBatchedSet) {
@@ -248,6 +254,55 @@
});
},
},
{
title: 'Update node admin key',
task: async ctx => {
const adminKey = PrivateKey.fromStringED25519(constants.GENESIS_KEY);
for (const nodeAlias of ctx.config.nodeAliases) {
const nodeId = Templates.nodeIdFromNodeAlias(nodeAlias);
const nodeClient = await self.accountManager.refreshNodeClient(
ctx.config.namespace,
nodeAlias,
self.getClusterRefs(),
this.configManager.getFlag<DeploymentName>(flags.deployment),
);

try {
let nodeUpdateTx = new NodeUpdateTransaction().setNodeId(nodeId);
const newPrivateKey = PrivateKey.generateED25519();

nodeUpdateTx = nodeUpdateTx.setAdminKey(newPrivateKey.publicKey);
nodeUpdateTx = nodeUpdateTx.freezeWith(nodeClient);
nodeUpdateTx = await nodeUpdateTx.sign(newPrivateKey);
const signedTx = await nodeUpdateTx.sign(adminKey);
const txResp = await signedTx.execute(nodeClient);
const nodeUpdateReceipt = await txResp.getReceipt(nodeClient);

self.logger.debug(`NodeUpdateReceipt: ${nodeUpdateReceipt.toString()} for node ${nodeAlias}`);

// save new key in k8s secret
const data = {
privateKey: Base64.encode(newPrivateKey.toString()),
publicKey: Base64.encode(newPrivateKey.publicKey.toString()),
};
await this.k8Factory
.default()
.secrets()
.create(
ctx.config.namespace,
Templates.renderNodeAdminKeyName(nodeAlias),
SecretType.OPAQUE,
data,
{
'solo.hedera.com/node-admin-key': 'true',
},
);
} catch (e) {
throw new SoloError(`Error updating admin key for node ${nodeAlias}: ${e.message}`, e);
}

}
},
},
{
title: 'Display results',
task: ctx => {
Expand Down Expand Up @@ -290,7 +345,7 @@

try {
await tasks.run();
} catch (e: Error | any) {
throw new SoloError(`Error in creating account: ${e.message}`, e);
} finally {
await this.closeConnections();
Expand All @@ -302,7 +357,7 @@
return true;
}

async create(argv: any) {
const self = this;
const lease = await self.leaseManager.create();

@@ -382,7 +437,7 @@

try {
await tasks.run();
} catch (e: Error | any) {
throw new SoloError(`Error in creating account: ${e.message}`, e);
} finally {
await lease.release();
@@ -392,7 +447,7 @@
return true;
}

async update(argv: any) {
const self = this;

interface Context {
@@ -565,7 +620,7 @@
.command({
command: 'init',
desc: 'Initialize system accounts with new keys',
builder: (y: any) => flags.setCommandFlags(y, flags.deployment),
builder: (y: any) => flags.setCommandFlags(y, flags.deployment, flags.nodeAliasesUnparsed),

handler: (argv: any) => {
self.logger.info("==== Running 'account init' ===");
self.logger.info(argv);
22 changes: 19 additions & 3 deletions src/commands/node/tasks.ts
@@ -68,10 +68,11 @@
import {NetworkNodes} from '../../core/network_nodes.js';
import {container} from 'tsyringe-neo';
import * as helpers from '../../core/helpers.js';
import {type Optional, type SoloListrTask, type SoloListrTaskWrapper} from '../../types/index.js';
import {type Optional} from '../../types/index.js';
import {type DeploymentName} from '../../core/config/remote/types.js';
import {ConsensusNode} from '../../core/model/consensus_node.js';
import {type K8} from '../../core/kube/k8.js';
import {Base64} from 'js-base64';

export class NodeCommandTasks {
private readonly accountManager: AccountManager;
@@ -620,9 +621,24 @@
}

loadAdminKey() {
return new Task('Load node admin key', (ctx: any, task: ListrTaskWrapper<any, any, any>) => {
return new Task('Load node admin key', async (ctx: any, task: ListrTaskWrapper<any, any, any>) => {
const config = ctx.config;
config.adminKey = PrivateKey.fromStringED25519(constants.GENESIS_KEY);
if (ctx.config.nodeAlias) {
try {
// load nodeAdminKey from k8s if it exists
const keyFromK8 = await this.k8Factory
.default()
.secrets()
.read(config.namespace, Templates.renderNodeAdminKeyName(config.nodeAlias));
const privateKey = Base64.decode(keyFromK8.data.privateKey);
config.adminKey = PrivateKey.fromStringED25519(privateKey);
} catch (e: Error | any) {
this.logger.debug(`Error in loading node admin key: ${e.message}, use default key`);
config.adminKey = PrivateKey.fromStringED25519(constants.GENESIS_KEY);
}

} else {
config.adminKey = PrivateKey.fromStringED25519(constants.GENESIS_KEY);
}
});
}

4 changes: 4 additions & 0 deletions src/core/templates.ts
@@ -47,6 +47,10 @@ export class Templates {
return `hedera-${nodeAlias}.crt`;
}

public static renderNodeAdminKeyName(nodeAlias: NodeAlias): string {
return `${nodeAlias}-admin`;
}

public static renderNodeFriendlyName(prefix: string, nodeAlias: NodeAlias, suffix = ''): string {
const parts = [prefix, nodeAlias];
if (suffix) parts.push(suffix);
18 changes: 17 additions & 1 deletion test/e2e/commands/account.test.ts
@@ -31,6 +31,9 @@ import {NamespaceName} from '../../../src/core/kube/resources/namespace/namespac
import {type NetworkNodes} from '../../../src/core/network_nodes.js';
import {container} from 'tsyringe-neo';
import {InjectTokens} from '../../../src/core/dependency_injection/inject_tokens.js';
import * as helpers from '../../../src/core/helpers.js';
import {Templates} from '../../../src/core/templates.js';
import * as Base64 from 'js-base64';

const defaultTimeout = Duration.ofSeconds(20).toMillis();

@@ -40,7 +43,7 @@ const testSystemAccounts = [[3, 5]];
const argv = getDefaultArgv(namespace);
argv[flags.namespace.name] = namespace.name;
argv[flags.releaseTag.name] = HEDERA_PLATFORM_VERSION_TAG;
argv[flags.nodeAliasesUnparsed.name] = 'node1';
argv[flags.nodeAliasesUnparsed.name] = 'node1,node2';
argv[flags.generateGossipKeys.name] = true;
argv[flags.generateTlsKeys.name] = true;
argv[flags.clusterRef.name] = TEST_CLUSTER;
@@ -112,6 +115,19 @@ e2eTestSuite(testName, argv, undefined, undefined, undefined, undefined, undefin
await accountManager.close();
});

it('Node admin key should have been updated, not equal to genesis key', async () => {
const nodeAliases = helpers.parseNodeAliases(argv[flags.nodeAliasesUnparsed.name]);
for (const nodeAlias of nodeAliases) {
const keyFromK8 = await k8Factory
.default()
.secrets()
.read(namespace, Templates.renderNodeAdminKeyName(nodeAlias));
const privateKey = Base64.decode(keyFromK8.data.privateKey);

expect(privateKey.toString()).not.to.equal(genesisKey.toString());
}
});

for (const [start, end] of testSystemAccounts) {
for (let i = start; i <= end; i++) {
it(`account ${i} should not have genesis key`, async () => {
27 changes: 21 additions & 6 deletions test/e2e/dual-cluster/README.md
@@ -1,20 +1,25 @@
# Local Dual Cluster Testing

This document describes how to test the dual cluster setup locally.

## Prerequisites
- Make sure you give your Docker sufficient resources
- ? CPUs
- ? GB RAM
- ? GB Swap
- ? GB Disk Space
- If you are tight on resources you might want to make sure that no other Kind clusters are running or anything that is resource heavy on your machine.

* Make sure you give your Docker sufficient resources
* ? CPUs
* ? GB RAM
* ? GB Swap
* ? GB Disk Space
* If you are tight on resources you might want to make sure that no other Kind clusters are running or anything that is resource heavy on your machine.

## Calling

```bash
# from your Solo root directory run:
./test/e2e/dual-cluster/setup-dual-e2e.sh
```

Output:

```bash
SOLO_CHARTS_DIR:
Deleting cluster "solo-e2e-c1" ...
@@ -180,28 +185,38 @@ metrics-server kube-system 1 2025-02-14 16:05:07.217358 +0000 UTC
solo-cluster-setup solo-setup 1 2025-02-14 16:05:58.114619 +0000 UTC deployed solo-cluster-setup-0.44.0 0.44.0
Switched to context "kind-solo-e2e-c1".
```

## Diagnostics

The `./diagnostics/cluster/deploy.sh` script deploys a `cluster-diagnostics` deployment (and its pod) with a service whose external IP is exposed. It is deployed to both clusters, runs Ubuntu, and has most common diagnostic tools installed. Once it is running, you can shell into the pod and use it to run your own troubleshooting commands, for example to verify network connectivity or DNS resolution between the two clusters.

Calling

```bash
# from your Solo root directory run:
./test/e2e/dual-cluster/diagnostics/cluster/deploy.sh
```

Output:

```bash
namespace/cluster-diagnostics unchanged
configmap/cluster-diagnostics-cm unchanged
service/cluster-diagnostics-svc unchanged
deployment.apps/cluster-diagnostics unchanged
```

## Cleanup

Calling

```bash
# from your Solo root directory run:
kind delete clusters cluster1 cluster2
```

Output:

```bash
Deleted clusters: ["cluster1" "cluster2"]
```