Conversation
Signed-off-by: Asish Kumar <officialasishkumar@gmail.com>
Pull Request Overview
This PR refactors the Keploy Python SDK to implement a new coverage agent architecture based on Unix sockets. The old framework-specific middleware approach has been completely removed in favor of a unified agent that communicates with Keploy over sockets.
- Replaces framework-specific coverage middleware with a unified socket-based agent
- Switches from YAML file output to real-time JSON socket communication
- Updates packaging from setup.py to modern pyproject.toml format
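The socket exchange described above can be pictured with a minimal sketch. The socket path, the message fields (`command`, `test_id`, `status`), and the handler below are assumptions for illustration, not the SDK's actual protocol:

```python
import json
import os
import socket
import tempfile
import threading

def handle(conn):
    # Illustrative handler: echo the command back with a status field.
    with conn:
        msg = json.loads(conn.recv(4096).decode())
        conn.sendall(json.dumps({"status": "ok", "command": msg.get("command")}).encode())

def demo():
    # A Unix socket in a temp dir stands in for the agent's control socket.
    sock_path = os.path.join(tempfile.mkdtemp(), "agent.sock")
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(sock_path)
    server.listen(1)

    def serve_once():
        conn, _ = server.accept()
        handle(conn)

    t = threading.Thread(target=serve_once)
    t.start()

    # Client side: roughly what Keploy would do to issue a control command.
    client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    client.connect(sock_path)
    client.sendall(json.dumps({"command": "start", "test_id": "test-1"}).encode())
    reply = json.loads(client.recv(4096).decode())
    client.close()
    t.join()
    server.close()
    return reply
```

Sending one JSON document per connection keeps framing trivial; a long-lived connection would need length-prefixed or newline-delimited messages instead.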
Reviewed Changes
Copilot reviewed 12 out of 12 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| src/keploy_agent/__init__.py | New coverage agent implementation using Unix sockets for communication |
| src/keploy/utils.py | Removed legacy YAML-based dedup functionality |
| src/keploy/flaskCov.py | Removed Flask-specific coverage middleware |
| src/keploy/fastApiCov.py | Removed FastAPI-specific coverage middleware |
| src/keploy/djangoCov.py | Removed Django-specific coverage middleware |
| src/keploy/__init__.py | Removed legacy SDK exports |
| src/keploy/Keploy.py | Removed legacy test runner implementation |
| setup.py | Removed legacy setup.py configuration |
| pyproject.toml | Added modern project configuration |
| __init__.py | Removed root-level legacy exports |
| README.md | Updated documentation for new agent-based approach |
| .github/workflows/main.yml | Updated CI to use modern build tools |
```python
# Set up logging to be informative
logging.basicConfig(level=logging.INFO, format='[Keploy Agent] %(asctime)s - %(levelname)s - %(message)s')
```
Calling `logging.basicConfig()` at module level may interfere with the application's logging configuration. Consider using a module-specific logger with `logging.getLogger(__name__)` instead of configuring the root logger.
Suggested change:

```diff
-# Set up logging to be informative
-logging.basicConfig(level=logging.INFO, format='[Keploy Agent] %(asctime)s - %(levelname)s - %(message)s')
+# Set up a module-specific logger
+logger = logging.getLogger(__name__)
+logger.setLevel(logging.INFO)
+handler = logging.StreamHandler()
+formatter = logging.Formatter('[Keploy Agent] %(asctime)s - %(levelname)s - %(message)s')
+handler.setFormatter(formatter)
+logger.addHandler(handler)
```
```python
# Stores the ID of the test case currently being recorded
current_test_id = None

cov = coverage.Coverage(data_file=None, auto_data=True)
```
The global coverage instance should be properly initialized when coverage starts rather than at import time. This could cause issues if multiple tests run concurrently or if the coverage instance needs to be reset.
Suggested change:

```diff
-cov = coverage.Coverage(data_file=None, auto_data=True)
+cov = None  # Placeholder for the coverage instance, initialized dynamically
```
we need it during import time
```python
        while True:
            conn, _ = server.accept()
            handler_thread = threading.Thread(target=handle_control_request, args=(conn,))
            handler_thread.start()
    except Exception as e:
        logging.error(f"Control server failed: {e}", exc_info=True)
    finally:
        server.close()
        logging.info("Control server shut down.")
```
Handler threads are created but never joined or tracked for cleanup. This could lead to resource leaks if many connections are made. Consider using a thread pool or tracking threads for proper cleanup.
Suggested change:

```diff
-        while True:
-            conn, _ = server.accept()
-            handler_thread = threading.Thread(target=handle_control_request, args=(conn,))
-            handler_thread.start()
+        # Limit concurrent handlers; the with-block joins the workers on exit
+        with ThreadPoolExecutor(max_workers=10) as executor:
+            while True:
+                conn, _ = server.accept()
+                executor.submit(handle_control_request, conn)
     except Exception as e:
         logging.error(f"Control server failed: {e}", exc_info=True)
     finally:
         server.close()
         logging.info("Control server shut down.")
```

(The with-statement already waits for the workers on exit, so no separate `executor.shutdown(wait=True)` call is needed.)
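The bounded-pool behavior is easy to verify in isolation. Here `handle` is a stand-in for `handle_control_request`, and the worker cap of 2 is arbitrary:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Record which worker threads actually ran handlers.
seen_threads = set()
lock = threading.Lock()

def handle(task_id):
    # Stand-in for handle_control_request: note the current thread, return the id.
    with lock:
        seen_threads.add(threading.current_thread().name)
    return task_id

# Twenty "connections" are served by at most 2 reused workers, and the
# with-block joins them all on exit -- no leaked threads to track.
with ThreadPoolExecutor(max_workers=2) as executor:
    results = list(executor.map(handle, range(20)))
```

Compared to spawning one `threading.Thread` per connection, the pool caps memory and gives a single join point for cleanup.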
```python
# Start the control server in a background daemon thread.
control_thread = threading.Thread(target=start_control_server, daemon=True)
control_thread.start()

logging.info("Agent initialized and control server started in the background.")
```
Using a daemon thread for the control server means it will be forcibly terminated when the main program exits, potentially interrupting active coverage collection. Consider implementing graceful shutdown instead.
Suggested change:

```diff
-# Start the control server in a background daemon thread.
-control_thread = threading.Thread(target=start_control_server, daemon=True)
-control_thread.start()
+import atexit
+
+# Initialize the shutdown event and start the control server in a background thread.
+shutdown_event = threading.Event()
+control_thread = threading.Thread(target=start_control_server, args=(shutdown_event,))
+control_thread.start()
 logging.info("Agent initialized and control server started in the background.")
+
+# Register a shutdown hook to ensure graceful termination.
+def shutdown_handler():
+    logging.info("Shutdown signal received. Stopping control server...")
+    shutdown_event.set()
+    control_thread.join()
+    logging.info("Control server stopped.")
+
+atexit.register(shutdown_handler)
```

(`start_control_server` would need to accept and honor the `shutdown_event` argument; `import atexit` is hoisted to the top of the module rather than placed mid-code.)
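One detail the suggestion leaves open: a blocking `accept()` never checks `shutdown_event`, so the join could hang until the next connection arrives. Giving the listening socket a short timeout lets the loop poll the event. A self-contained sketch (the socket path and the trivial handler are illustrative):

```python
import os
import socket
import tempfile
import threading

def serve(server, shutdown_event, handle):
    # A short accept() timeout lets the loop notice shutdown_event;
    # a plain blocking accept() would wait on it indefinitely.
    server.settimeout(0.2)
    while not shutdown_event.is_set():
        try:
            conn, _ = server.accept()
        except socket.timeout:
            continue
        handle(conn)
    server.close()

# Demo: start the loop, signal shutdown, and confirm the thread exits promptly.
path = os.path.join(tempfile.mkdtemp(), "ctl.sock")  # illustrative socket path
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)
server.listen(1)
shutdown_event = threading.Event()
t = threading.Thread(target=serve, args=(server, shutdown_event, lambda conn: conn.close()))
t.start()
shutdown_event.set()
t.join(timeout=2)
alive = t.is_alive()
```

The timeout value trades shutdown latency against wakeup frequency; an alternative is to connect to the socket once from the shutdown hook to unblock `accept()` directly.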
Issue that this pull request solves
Closes: https://github.com/keploy/enterprise/issues/1350
Proposed changes
Brief description of what is fixed or changed
Types of changes
Put an `x` in the boxes that apply.
Checklist
Put an `x` in the boxes that apply.