SPS Validator Development Update and Invoice #4

in Splinterlands · yesterday

Validators Development #4

JPTR Corporation

This document offers a clear and thoughtful summary of the tickets we’ve successfully closed over the past 30 days, along with an overview of the work currently in progress. It’s designed to give you a comprehensive snapshot of our significant achievements, all presented in an easy-to-read format, with insights summarized by ChatGPT from Jira issues.

We hope this overview highlights the hard work and dedication that has gone into each task, ensuring that our development efforts are both transparent and impactful.

A payment of $40,000.00 is due to JPTR Corporation. Please remit the payment in USDC to the Ethereum address:
0x57d917726073D7582022897F753B034aA593220c.

Note: Before submitting the full amount, please send a minimal test transaction to verify the accuracy of the transfer. Once confirmed, proceed with the full payment.

Looking forward: We are thrilled to announce that next week marks the commencement of our internal community testing phase, where selected engineers from the DAO will have access to test and validate our services. This collaboration represents a significant milestone in our development journey, enabling us to gather invaluable feedback and ensure that our systems are robust, secure, and fully optimized for broader community participation. We deeply appreciate the DAO’s support and the enthusiastic involvement of our community members, and we are confident that this joint effort will drive the Splinterlands ecosystem to new heights of performance and reliability. Stay tuned for more updates as we embark on this exciting testing phase together!

TL;DR

● Setup Refactor: Refactored snapshot restore and generation processes for better handling of SteemMonsters data, streamlined setup scripts, and ensured seamless integration between SM snapshots and validator snapshots.
● Infrastructure Design Improvements: Developed and tested a robust snapshot migration process, enhanced setup scripts for easier validator deployment, and created an API version of the validator container for scalable API capacity.
● Testing and QA: Continued simulated production validator cutover, ensured data integrity by pointing SM services to backed-up production data, and conducted comprehensive integration tests to verify balance accuracy and system functionality.
● Community Engagement: Prepared for upcoming open community testing, providing detailed requirements for running validators and databases to facilitate smooth participation.

Validator Specs:

Validator Node (without db, and not acting as an API server):

  • minimum: 1 CPU / 2 GB of RAM
  • recommended: 2 CPU / 4 GB of RAM

Database (non-archive node, and not acting as an API server):

  • minimum: 2 CPU / 8 GB of RAM / 100 GB of disk space
  • recommended: 4 CPU / 16 GB of RAM / 100 GB of disk space
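As a rough aid for operators, the minimum and recommended specs above can be checked before installation. The sketch below is an illustrative preflight check, not part of the official setup scripts; the thresholds mirror the recommended validator-node specs, and reading total memory via `os.sysconf` is an assumption that holds on Linux and macOS.

```python
import os

# Recommended validator-node specs from the list above (assumed thresholds).
REC_CPUS = 2
REC_RAM_GB = 4

def meets_recommended(cpus: int, ram_gb: float) -> bool:
    """Return True if the host meets the recommended validator-node specs."""
    return cpus >= REC_CPUS and ram_gb >= REC_RAM_GB

if __name__ == "__main__":
    cpus = os.cpu_count() or 1
    # Total physical memory in GB (POSIX-only; an assumption of this sketch).
    ram_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1024**3
    print(f"CPUs: {cpus}, RAM: {ram_gb:.1f} GB, "
          f"recommended specs met: {meets_recommended(cpus, ram_gb)}")
```

A host with 2 cores and 4 GB of RAM passes; a 1-core / 2 GB host meets only the minimum tier.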

Validator Setup Enhancements: Refactoring and Optimization

  1. Refactored Validator Snapshot Restore Process
    To enhance the reliability and efficiency of our validator setup, we have extensively refactored the validator snapshot restore process:
    ● Improved Handling of SteemMonsters Data: The snapshot restore process now better manages SteemMonsters (SM) data, ensuring that all relevant information is accurately captured and restored.
    ● Error Reduction: By refining the restore logic, we have minimized the potential for errors during the restoration process, leading to more stable validator operations.
  2. Enhanced SteemMonsters Snapshot Generation
    We have overhauled the snapshot generation process for SteemMonsters to produce higher quality data for the validator:
    ● Data Integrity: Ensured that the snapshots contain complete and accurate data, which is crucial for validator performance and reliability.
    ● Optimization: Streamlined the snapshot generation workflow to reduce processing time and resource consumption, making it more efficient and scalable.
  3. Improved Validator Setup Scripts
    Our setup scripts have been significantly improved to simplify the validator deployment process:
    ● Elimination of Start Block Requirement: Validators no longer require a specific start block, as all necessary data is now pulled directly from the SM snapshot.
    ● Unified Snapshot Control: The snapshot now fully controls the setup process, ensuring consistency whether using an SM snapshot or a validator-specific snapshot. This unification eliminates discrepancies and simplifies the deployment process for both developers and community members.
  4. Seamless Integration Between SM and Validator Snapshots
    By ensuring there is no difference between SM snapshots and validator snapshots, we have achieved:
    ● Consistency: Both snapshot types are handled uniformly, reducing complexity and potential points of failure.
    ● Ease of Use: Simplifies the setup process, making it more accessible for community members to run their own validators without needing specialized knowledge for different snapshot types.
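The unified restore flow described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the snapshot field names (`last_block`, `tables`) are assumptions standing in for the real schema. The key idea is that the snapshot itself carries the resume point, so setup no longer takes a start-block argument, and SM and validator snapshots go through the same code path.

```python
import json
from pathlib import Path

def load_table(table: str, rows: list) -> None:
    # Placeholder: the real process would restore these rows into the database.
    print(f"restored {len(rows)} rows into {table}")

def restore_from_snapshot(snapshot_path: str) -> int:
    """Load a snapshot and return the block the validator should resume from."""
    snapshot = json.loads(Path(snapshot_path).read_text())
    for table, rows in snapshot["tables"].items():
        load_table(table, rows)
    # Because SM and validator snapshots share one format, this same path
    # handles both; the resume point comes from the snapshot itself.
    return snapshot["last_block"]
```

With this shape, adding or removing a snapshot type never changes the setup script, only the file it is handed.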

Infrastructure Improvements: Continued Snapshot Migration and Script Enhancements

  1. Developed Snapshot Migration Process
    To facilitate smooth transitions between different database environments, we have built out a comprehensive snapshot migration process:
    ● Automated Backup and Restore: Implemented automated procedures to back up the SteemMonsters production database, load it into the test environment, build a validator snapshot, and restore it to the test environment validator.
    ● Data Consistency: Ensured that all data migrated through snapshots remains consistent and intact, preserving the integrity of user balances and transaction histories.
    ● Data Transformation: Continued to test that all blocks and transactions are accurate from the snapshot to the validation environment. This ensures the accuracy of the Extract, Transform, Load (ETL) pipeline across all environments.
  2. Enhanced Setup Scripts with New Snapshot Data
    Our setup scripts have been updated to leverage the new snapshot data effectively:
    ● Streamlined Initialization: Scripts now automatically pull all necessary data from the SM snapshot, eliminating the need for manual configuration and reducing setup time.
    ● Robust Error Handling: Improved error detection and handling within the scripts to promptly address any issues that arise during the setup process.
  3. Created API Version of Validator Container
    To enhance scalability and manage API capacity more efficiently, we have developed an API version of the validator container:
    ● API Mode Operation: This mode allows the validator to function solely as an API server, decoupling it from node operations and enabling independent scaling based on demand.
    ● Scalable Architecture: Facilitates the scaling of API services without the need to deploy additional validator nodes, optimizing resource usage and cost.
    ● Load Balancing Integration: Implemented load balancing to distribute API requests evenly, ensuring consistent performance even under high traffic conditions.
  4. Snapshot Process Optimization
    Further optimizations to the snapshot process include:
    ● Faster Snapshots: Reduced the time required to generate and restore snapshots, enhancing overall system responsiveness.
    ● Resource Efficiency: Minimized the resource footprint during snapshot operations, allowing for more efficient use of hardware and cloud resources.
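The four-step migration pipeline described in point 1 above can be sketched as a strictly ordered sequence with an audit trail. The step functions here are hypothetical placeholders; the real tooling (e.g. `pg_dump` for the backup, the project's snapshot builder) would slot in where the comments indicate.

```python
def backup_prod_db() -> None:
    pass  # placeholder: e.g. pg_dump of the SteemMonsters production database

def load_into_test() -> None:
    pass  # placeholder: restore the dump into the test environment

def build_snapshot() -> None:
    pass  # placeholder: generate a validator snapshot from the test database

def restore_snapshot() -> None:
    pass  # placeholder: restore the snapshot to the test-environment validator

def run_migration() -> list:
    """Run the migration steps in order and return an audit trail."""
    audit = []
    for name, step in [
        ("backup_prod_db", backup_prod_db),
        ("load_into_test", load_into_test),
        ("build_snapshot", build_snapshot),
        ("restore_snapshot", restore_snapshot),
    ]:
        step()
        audit.append(name)
    return audit
```

Encoding the order in one list makes it hard for an operator to skip a step, which is what "automated backup and restore" buys over a hand-run checklist.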

Continued Testing and QA: Ensuring Robust Integration and Data Integrity

  1. Continued Simulated Production Validator Cutover
    We continued to conduct a simulation of the production validator cutover to ensure readiness for live deployment:
    ● Database Backup: Successfully backed up the SteemMonsters production database, ensuring that no data is lost during the migration.
    ● Test Environment Setup: Loaded the backed-up data into the test environment, built a validator snapshot, and restored it to the test environment validator.
    ● Validation: Verified that the restored validator operates correctly within the test environment, maintaining data integrity and functionality.
  2. Enhanced Testing with Backed-Up Production Data
    To ensure comprehensive testing, we have pointed SteemMonsters services in the test environment to use the backed-up production data:
    ● Realistic Testing Scenarios: By using actual production data, we can run more accurate and meaningful tests, identifying potential issues that might not surface with synthetic data.
    ● Balance Verification: Conducted extensive tests to ensure that user balances match up correctly, maintaining consistency between the SteemMonsters and validator systems.
  3. Comprehensive Integration Testing
    We have performed detailed integration tests with the validator running in the test environment:
    ● Transaction Validation: Ensured that all transactions are correctly validated by the validator, preventing discrepancies and maintaining system integrity.
    ● Reward Distribution: Verified that rewards are accurately distributed from the reward pools, aligning with the expected distribution criteria.
    ● System Stability: Monitored system performance and stability under various test conditions, ensuring that the validator integration does not introduce any instability.
  4. Continued Running Tests in Test Environment
    Active testing in the test environment includes:
    ● Balance Matching: Continuously checking that user balances in the test environment align with those in the production database, ensuring accurate data replication.
    ● Functionality Verification: Confirming that all validator-related functionalities, such as staking, unstaking, and reward claiming, operate seamlessly alongside SteemMonsters services.
  5. QA Improvements
    Incorporating QA feedback to drive improvements:
    ● Issue Identification: Identified and addressed issues discovered during testing, refining the validator setup and integration processes.
    ● Iterative Enhancements: Implemented iterative improvements based on QA findings, enhancing overall system performance and reliability.
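The balance-matching check described above boils down to a diff between two account-balance sets. The sketch below uses plain dicts for illustration; the actual QA process queries the SteemMonsters and validator databases, but the comparison logic is the same in spirit.

```python
def diff_balances(sm: dict, validator: dict) -> dict:
    """Return accounts whose balances differ between the two systems.

    Accounts missing from one side are treated as having a zero balance,
    so they surface as mismatches too.
    """
    mismatches = {}
    for account in sm.keys() | validator.keys():
        a, b = sm.get(account, 0), validator.get(account, 0)
        if a != b:
            mismatches[account] = (a, b)
    return mismatches
```

An empty result means the restore replicated every balance exactly; anything else pinpoints the accounts to investigate.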

Community Engagement: Preparing for Open Community Testing

  1. Preparation for Community Testing
    We are gearing up for open community testing scheduled for the following week. This phase is crucial for gathering real-world feedback and ensuring that the validator system performs reliably under diverse conditions. We look forward to getting our testing users onboarded.
  2. Validator Node Requirements
    To participate in community testing, validators need to meet specific hardware requirements:
    ● Minimum Specifications:
    ○ CPU: 1 core
    ○ RAM: 2 GB
    ● Recommended Specifications:
    ○ CPU: 2 cores
    ○ RAM: 4 GB
  3. Database Requirements for Validators
    In addition to the validator node, a dedicated database is required:
    ● Minimum Specifications:
    ○ CPU: 2 cores
    ○ RAM: 8 GB
    ○ Disk Space: 100 GB
    ● Recommended Specifications:
    ○ CPU: 4 cores
    ○ RAM: 16 GB
    ○ Disk Space: 100 GB
    ● Usage: These databases will operate as non-archive nodes and will not serve as API servers, ensuring optimized performance for validator operations.
  4. Simplified Setup Process
    Our refactored setup scripts and unified snapshot control ensure that community members can easily set up their validators without needing to manage different snapshot types.
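From a community member's point of view, the simplified setup reduces to "fetch a snapshot, restore it, start the node." The sketch below illustrates that shape only; the function names and the returned values are placeholders, not the real setup scripts. Note there is no start-block or snapshot-type parameter anywhere: the snapshot drives everything.

```python
def download(url: str) -> str:
    # Placeholder: real code would fetch the snapshot file to local disk.
    return "/tmp/snapshot.json"

def restore(path: str) -> int:
    # Placeholder: real code restores the database and reads the snapshot's
    # last block, which becomes the resume point.
    return 90_000_000

def setup_validator(snapshot_url: str) -> str:
    """One-shot setup: the snapshot alone determines where the node resumes."""
    path = download(snapshot_url)
    start_block = restore(path)
    return f"validator starting at block {start_block}"
```

Because the same entry point handles SM and validator snapshots, a participant never has to know which kind they were given.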