Sentinel Python SDK Specification

1. Package Structure

sentinel-python-sdk/
├── src/
│   └── sentinel/
│       ├── __init__.py
│       ├── init.py
│       ├── registration/
│       │   ├── __init__.py
│       │   ├── project.py
│       │   └── task.py
│       ├── wrappers/
│       │   ├── __init__.py
│       │   ├── anthropic.py
│       │   ├── openai.py
│       │   └── litellm.py
│       ├── converters/
│       │   ├── __init__.py
│       │   └── format_converter.py
│       └── api/
│           ├── __init__.py
│           └── client.py
├── tests/
├── docs/
├── setup.py
├── pyproject.toml
└── README.md

2. Dependencies

Required packages (see the setup.py sketch after this list):
- requests>=2.28.0
- openai>=1.0.0
- anthropic>=0.3.0
- litellm>=0.1.0
- pydantic>=2.0.0
- python-dotenv>=1.0.0
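
These constraints could be declared in setup.py roughly as follows; the version number, Python floor, and other packaging metadata shown here are assumptions, since only the dependency pins are fixed by this spec.

```python
# setup.py -- illustrative sketch only; metadata beyond the dependency pins is assumed
from setuptools import find_packages, setup

setup(
    name="sentinel-python-sdk",
    version="0.1.0",                # placeholder version, not specified by this document
    package_dir={"": "src"},
    packages=find_packages(where="src"),
    python_requires=">=3.8",        # assumed floor; Section 7 leaves the supported versions to be specified
    install_requires=[
        "requests>=2.28.0",
        "openai>=1.0.0",
        "anthropic>=0.3.0",
        "litellm>=0.1.0",
        "pydantic>=2.0.0",
        "python-dotenv>=1.0.0",
    ],
)
```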

3. Feature Specifications

3.1 Initialization (sentinel.init)

  • Function: init(api_key: str, project_id: Optional[str] = None)
  • Description: Initializes the SDK and performs all required setup in a single call (usage sketch after this list)
  • Process:
    1. Validates API key
    2. Sets up global configuration
    3. Calls register_project if project_id not provided
    4. Automatically patches supported LLM clients
  • Error handling:
    • Invalid API key
    • Network connectivity issues
    • Invalid project_id format
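
A usage sketch, assuming the package is installed; the environment variable name SENTINEL_API_KEY is an illustrative choice, not part of this spec.

```python
import os

import sentinel

# After this call the SDK is configured and the supported LLM clients are
# patched, so subsequent OpenAI/Anthropic/LiteLLM calls are intercepted.
sentinel.init(
    api_key=os.environ["SENTINEL_API_KEY"],  # assumed variable name
    project_id=None,  # omitted -> the SDK registers a project itself (step 3)
)
```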

3.2 Project Registration

  • Function: register_project(name: str, metadata: Optional[Dict] = None) -> str
  • Description: Registers a new project with Sentinel (usage example after this list)
  • Returns: project_id
  • Error handling:
    • Duplicate project names
    • Invalid metadata format
    • Network errors
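
A usage sketch of explicit project registration; the project name and metadata keys are arbitrary examples.

```python
import sentinel

sentinel.init(api_key="...")  # see 3.1; initialization is assumed to happen first

project_id = sentinel.register_project(
    name="support-agent-eval",         # arbitrary example name
    metadata={"team": "ml-platform"},  # arbitrary example metadata
)
# project_id is an opaque identifier assigned by the Sentinel backend.
```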

3.3 Task Registration

  • Function: register_task(name: str, project_id: str, metadata: Optional[Dict] = None) -> str
  • Description: Registers a new task within an existing project (usage example after this list)
  • Returns: task_id
  • Error handling:
    • Invalid project_id
    • Duplicate task names
    • Invalid metadata format
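
Continuing the example from 3.2, a task might be registered like this (names and metadata are again arbitrary):

```python
task_id = sentinel.register_task(
    name="triage-conversation",     # arbitrary example name
    project_id=project_id,          # returned by register_project in 3.2
    metadata={"priority": "high"},  # arbitrary example metadata
)
```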

3.4 Client Wrapping

  • Implementation: Monkey-patching of each supported client's completion entry points (sketch after this list)
  • Supported clients:
    • OpenAI
    • Anthropic
    • LiteLLM
  • Wrapper functionality:
    1. Intercept client initialization
    2. Wrap completion/chat methods
    3. Call submit_request before API call
    4. Call submit_response after API response
    5. Maintain original client behavior
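
A sketch of the monkey-patching approach for the OpenAI client. submit_request and submit_response are stubs standing in for the SDK hooks named above, and the import path is specific to openai>=1.0 and may move between client versions.

```python
import functools

from openai.resources.chat.completions import Completions  # path for openai>=1.0; may change across versions


def submit_request(params: dict) -> None:
    """Stub for the SDK's request hook (step 3 above)."""


def submit_response(response) -> None:
    """Stub for the SDK's response hook (step 4 above)."""


def _patch_openai() -> None:
    """Wrap Completions.create so every chat call is reported to Sentinel."""
    original_create = Completions.create

    @functools.wraps(original_create)
    def wrapped_create(self, *args, **kwargs):
        submit_request(kwargs)                             # step 3: before the API call
        response = original_create(self, *args, **kwargs)  # step 5: original behavior preserved
        submit_response(response)                          # step 4: after the API response
        return response

    Completions.create = wrapped_create
```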

3.5 Format Conversion

  • Implement a converter from each supported client's request format to the OpenAI-style standard format (a converter sketch follows the structure below)
  • Standard format structure:
{
    "messages": List[Dict[str, str]],
    "model": str,
    "temperature": float,
    "max_tokens": int,
    "additional_params": Dict
}
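
A sketch of one such converter, from Anthropic Messages API kwargs to the standard structure above; the default values and the handling of unknown parameters are assumptions, and Anthropic-specific features such as content blocks and tool use are ignored here.

```python
from typing import Any, Dict


def anthropic_to_standard(params: Dict[str, Any]) -> Dict[str, Any]:
    """Convert Anthropic Messages API kwargs into the standard structure (sketch)."""
    messages = []
    # Anthropic passes the system prompt as a top-level parameter, whereas the
    # OpenAI-style format expects it as a message with role "system".
    if params.get("system"):
        messages.append({"role": "system", "content": params["system"]})
    messages.extend(params.get("messages", []))

    known = {"model", "messages", "system", "temperature", "max_tokens"}
    return {
        "messages": messages,
        "model": params.get("model", ""),
        "temperature": params.get("temperature", 1.0),  # assumed default
        "max_tokens": params.get("max_tokens", 1024),   # assumed default
        "additional_params": {k: v for k, v in params.items() if k not in known},
    }
```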

4. Testing Requirements

  • Minimum test coverage: 90%
  • Test types:
    • Unit tests for all components
    • Integration tests for client wrapping
    • End-to-end tests for full workflow
  • Mock all external API calls (pytest sketch after this list)
  • Cover every error case listed in Section 3
  • Performance tests that verify the overhead budget in Section 6
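
A pytest sketch of the mocking approach; the internal module path sentinel.api.client.APIClient, the response shape, and the exception behavior are all assumptions about the eventual implementation.

```python
from unittest import mock

import pytest


def test_init_rejects_empty_api_key():
    """Covers the 'invalid API key' error case from Section 3.1."""
    import sentinel

    with pytest.raises(Exception):  # concrete exception type is not fixed by this spec
        sentinel.init(api_key="")


def test_register_project_does_not_hit_the_network():
    """Covers project registration with the HTTP layer mocked out."""
    import sentinel

    # Patch the (assumed) low-level client so no real request is sent.
    with mock.patch(
        "sentinel.api.client.APIClient.post",
        return_value={"project_id": "proj_123"},  # assumed response shape
    ) as post:
        project_id = sentinel.register_project(name="demo")

    post.assert_called_once()
    assert project_id == "proj_123"
```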

5. Documentation Requirements

  • Docstring format: Google style (example after this list)
  • Documentation files:
    • README.md with quick start guide
    • API reference (auto-generated)
    • Integration guide
    • Examples directory
  • Code comments for complex logic
  • Type hints for all functions
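
A Google-style docstring example for one of the public functions; the Raises entry names a concrete exception type only for illustration.

```python
from typing import Dict, Optional


# Note: the Raises section below assumes ValueError; this spec does not fix
# the concrete exception types.
def register_task(name: str, project_id: str, metadata: Optional[Dict] = None) -> str:
    """Register a new task within an existing project.

    Args:
        name: Human-readable task name, unique within the project.
        project_id: Identifier previously returned by ``register_project``.
        metadata: Optional free-form attributes to attach to the task.

    Returns:
        The task_id assigned by the Sentinel backend.

    Raises:
        ValueError: If ``project_id`` is not a valid identifier.
    """
    ...
```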

6. Performance Considerations

  • Maximum latency overhead: 50ms per request
  • Asynchronous submission of metrics (sketch after this list)
  • Local caching of project/task IDs
  • Minimal memory footprint
  • Efficient format conversion
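
A sketch of asynchronous metric submission using a background worker thread; the queue-based design and all names here are assumptions, shown only to illustrate keeping reporting off the request path.

```python
import atexit
import queue
import threading

_events: "queue.Queue[object]" = queue.Queue()
_SHUTDOWN = object()  # marker telling the worker thread to stop


def _send_to_backend(event: dict) -> None:
    """Placeholder for the actual submission via sentinel.api.client."""


def _worker() -> None:
    # Draining the queue off the request path keeps per-call overhead
    # well under the 50 ms budget above.
    while True:
        event = _events.get()
        if event is _SHUTDOWN:
            break
        _send_to_backend(event)


_thread = threading.Thread(target=_worker, name="sentinel-metrics", daemon=True)
_thread.start()
atexit.register(lambda: _events.put(_SHUTDOWN))


def submit_response(response) -> None:
    """Enqueue instead of sending inline; the worker thread handles delivery."""
    _events.put({"response": response})
```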

7. Compatibility

  • Specify the Python versions the package should support
  • List any operating system or environment-specific considerations

8. Delivery Format

  • Describe how the final package should be delivered (e.g., GitHub repository, PyPI package)
  • Specify any continuous integration or deployment requirements