diff --git a/.gitignore b/.gitignore index 75d97f8..dda7491 100644 --- a/.gitignore +++ b/.gitignore @@ -6,3 +6,5 @@ build venv /network/secret.py /debugger.log +venv-3.8 +getcwd_test.py \ No newline at end of file diff --git a/README.md b/README.md index c1b0d7f..7f33d2f 100644 --- a/README.md +++ b/README.md @@ -5,7 +5,7 @@ This fork uses new methods to drop packets to R*-owned resources that are likely By simply observing network activity when playing GTA Online, it was discovered that while all packets were encrypted, the "type" of packet can still be determined from simply checking the packet's payload size. Guardian already uses PyDivert which conveniently supports filtering on individual packets, so only a few minor modifications to the filtering rules were necessary to produce this fork which supports Online 1.54 and onwards. -### [Download 3.1.0b4 (latest)](https://gitlab.com/Speyedr/guardian-fastload-fix/-/raw/master/public_builds/guardian-3.1.0b4-fastload-fix.zip) +### [Download 3.1.0b5 (latest)](https://gitlab.com/Speyedr/guardian-fastload-fix/-/raw/master/public_builds/guardian-3.1.0b5-fastload-fix.zip) ## Usage To increase the chance of a successful session, it is recommended that you follow these instructions: @@ -24,7 +24,28 @@ To increase the chance of a successful session, it is recommended that you follo 7. Once your friends are loading into your session (they've confirmed they want to join your session and are now in the clouds), start a **Locked Session**. \- While a session is Locked, no one will be able to join the session, but those already connecting / connected should remain. -Guardian _may_ work in other circumstances / setups, but are also less likely to produce secured sessions. +Guardian _may_ work in other circumstances / setups, but is less likely to produce secured sessions. + +## Session Types + +Guardian has many different kinds of sessions, each with different behaviours intended to be used under different circumstances. 
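The payload-size heuristic described above (inferring a packet's "type" without decrypting it) can be sketched in a few lines. This is not Guardian's actual filter: the labels and byte thresholds below are hypothetical placeholders, since the real matching lives in the PyDivert filter rules and is deliberately not documented in this README.

```python
# Hypothetical sketch of size-based packet classification.
# The labels and thresholds are illustrative only; Guardian's
# real PyDivert filter rules use different criteria.
def classify_by_size(payload_len: int) -> str:
    if payload_len == 0:
        return "keep-alive"   # assumed label
    if payload_len < 125:     # assumed threshold
        return "control"      # assumed label
    return "game-data"        # assumed label
```

A filter loop would then drop or pass each UDP packet based on the guessed type, without ever touching the encrypted contents.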
+ +The most important requirement for securing a session with Guardian is that you are the **"session host"**. You can still use Guardian to block packets as a non-host player, but improper use of session types as a non-host will likely get you disconnected from the session. + +- _Solo Session_ + - The strictest firewall, intended for use when you plan to play by yourself. No one can connect to your game session, but critical R* related services and anything SocialClub related will still be let through. If you are in a session with any other player, they will lose connection to you. + +- _Whitelisted Session_ + - Only IP addresses in your Custom list (`Lists -> Custom -> Add`) will be allowed to connect to you. If you are the host of a session, anyone not on your Custom list will likely lose connection to the session. If you are non-host and enable this session type while another player in your game session is not on your Custom list (whether already in the session or joining later), you will lose connection to everyone else: your client cannot communicate with that player, and you do not have host privileges to keep them out of the session. + +- _Blacklisted Session_ + - IP addresses in your Blacklist (`Lists -> Blacklist -> Add`) will not be allowed to connect to you. If a connection is routed through R* servers, that connection will also be blocked as a security measure. This mode is _not recommended_, as GTA Online has custom routing if only a handful of IP addresses are blocked. + +- _Auto Whitelisted Session_ + - Similar to _Whitelisted Session_, but everybody in the session is temporarily added to your whitelist, which means they won't be kicked. Any automatically collected IPs will be lost once the session ends, and there is (currently) no way to save them. 
Any connection that is believed to be a custom route (also known as a "Rockstar Tunnel") will be flagged, and you will be asked if you want to save these IPs to the temporary whitelist or not. If you do decide to save these IPs, players attempting to connect to you may be routed through these tunnels and may bypass your intended whitelist. + +- _Locked Session_ + - This mode blocks all new connections, preventing new players from entering the game session. Anyone already in the game session remains, and this mode prevents people from entering the session through a "Rockstar Tunnel" while allowing anyone currently being tunnelled to remain in the game session. However, if a player leaves they will not be able to get back in (unless you stop the _Locked Session_, of course). Enabling this mode as a non-host does _not_ drop you from a session. ## Motivation @@ -43,10 +64,10 @@ I never quite liked the idea of firewalled sessions, however modders were target ## Requirements #### System -- Python 3.6+ 64 bit -- Windows Vista/7/8/10 or Windows Server 2008 64 bit +- Python 3.9+ 64 bit +- Windows 8/10/11 or Windows Server 2012 64 bit - Administrator Privileges -#### Packages +#### Packages `(only if building from source)` - See [requirements.txt](requirements.txt) - To install these required packages, run `pip install -r requirements.txt` in your command prompt in the context of the virtual environment you will also be using to run `setup.py`. @@ -60,6 +81,10 @@ I never quite liked the idea of firewalled sessions, however modders were target - No reverse engineering of any R*-owned Intellectual Property was undertaken to produce this fork. - No decryption (nor any similar attack on encryption used to secure GTA Online) was performed to investigate packets. 
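The "Rockstar Tunnel" detection described under Session Types boils down to a CIDR-membership test against known R*/Azure address blocks. Below is a minimal stand-in for `util.DynamicBlacklist.ip_in_cidr_block_set` using only the standard library; the ranges shown are made up for illustration, not the real Azure blocks.

```python
import ipaddress

# Simplified stand-in for util.DynamicBlacklist.ip_in_cidr_block_set:
# report whether an address falls inside any of the given CIDR blocks.
def ip_in_cidr_blocks(ip: str, blocks: set) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(block) for block in blocks)

azure_like = {"20.0.0.0/8", "52.224.0.0/11"}  # illustrative ranges only
```

Guardian's real implementation additionally supports a minimum CIDR suffix and builds the block set by scraping Microsoft's published Azure ranges.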
+## Support +- [**> Open an issue**](https://gitlab.com/Speyedr/guardian-fastload-fix/-/issues/new) +- [**> Join the Discord server**](https://discord.gg/6FzKCh4j4v) + ## Credits (for this fork) #### Developers - [**DintNL**](https://gitlab.com/DintNL): Co-dev, pointed me in the right direction, suggested I make this fork. diff --git a/app.py b/app.py index 288b927..a8acff8 100644 --- a/app.py +++ b/app.py @@ -1,4 +1,7 @@ from __future__ import print_function, unicode_literals +import random +import string +import traceback from questionary import Validator, ValidationError, prompt from prompt_toolkit.styles import Style import os @@ -7,9 +10,9 @@ from network.blocker import * import pydivert import sys -from multiprocessing import freeze_support +from multiprocessing import freeze_support, Manager import ipaddress -from network import networkmanager +from network import networkmanager, sessioninfo from distutils.version import StrictVersion import webbrowser import socket @@ -20,6 +23,10 @@ import logging import util.DynamicBlacklist # new Azure-blocking functionality from requests import RequestException +from pathlib import Path # save local azure file copy +from util.WorkingDirectoryFix import wd_fix # workaround for python's working directory jank + +wd_fix() # Fix working directory before doing literally anything else logger = logging.getLogger('guardian') logger.propagate = False @@ -37,7 +44,7 @@ ipv4 = re.compile(r"((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(\.|$)){4}") domain = re.compile(r"^[a-z]+([a-z0-9-]*[a-z0-9]+)?(\.([a-z]+([a-z0-9-]*[\[a-z0-9]+)?)+)*$") -version = '3.1.0b4' +version = '3.1.0b5' style = Style([ ('qmark', 'fg:#00FFFF bold'), # token in front of the question @@ -103,6 +110,8 @@ def validate(self, document): except (ipaddress.AddressValueError, socket.gaierror): raise error + # TODO: Add an extra validator to check if an IP could be used by R* services (i.e. 
it's part of Microsoft Azure) + @staticmethod def validate_get(text): error = ValidationError(message='Not a valid IP or URL', @@ -160,6 +169,23 @@ def validate(self, document): cursor_position=len(document.text)) # Move cursor to end +def crash_report(exception, additional=None, filename=None): + if filename is None: + filename = f"crashreport_{str(hex(int(time.time_ns())))[2:]}.log" + + handle = open(filename, 'w') + + handle.write(f"Report local time: {time.asctime(time.localtime())}\nReport UTC time: {time.asctime(time.gmtime())}\n\n") + handle.write(f"Error: {str(exception)}\n\n") + handle.write(f"{traceback.format_exc()}\n") + + if additional is not None: + handle.write(f"\nAdditional info: {str(additional)}\n") + + handle.close() + return + + def main(): global cloud, config, custom_ips, blacklist, friends, dynamic_blacklist while True: @@ -194,7 +220,7 @@ def main(): 'value': 'whitelist', }, { - 'name': 'Blacklisted session [Not working]', + 'name': 'Blacklisted session [' + ('Experimental' if len(dynamic_blacklist) > 0 else 'Not working') + ']', 'value': 'blacklist', }, { @@ -206,11 +232,7 @@ def main(): 'value': 'lock_session', }, { - 'name': 'Lock session w/ Whitelist [Experimental]', - 'value': 'lock_whitelist', - }, - { - 'name': 'Kick unknowns [Untested]', + 'name': 'Kick unknowns [Unstable]', 'value': 'kick' }, { @@ -222,13 +244,17 @@ def main(): 'value': 'lists' }, { - 'name': 'Kick by IP [Untested]', + 'name': 'Kick by IP [Unstable]', 'value': 'kick_by_ip' }, { 'name': 'Token', 'value': 'token' }, + { + 'name': 'Discord', + 'value': 'discord' + }, { 'name': 'Support zip', 'value': 'support_zip' @@ -248,281 +274,457 @@ def main(): option = answer['option'] if option == 'solo': - logger.info('Starting solo session') - print_white('Running: "' + - Fore.LIGHTCYAN_EX + 'Solo session' + - Fore.LIGHTWHITE_EX + '" Press "' + Fore.LIGHTCYAN_EX + 'CTRL + C' + - Fore.LIGHTWHITE_EX + '" to stop.') + print_white('SOLO SESSION:\n') + print('No one can connect to 
your game session,\n' + 'but critical R* and SocialClub activity\n' + 'will still get through.\n\n' + 'If you are in a session with any other player,\n' + 'they will lose connection to you.\n') - packet_filter = Whitelist(ips=[]) - try: - packet_filter.start() - while True: - time.sleep(10) # this is still very terrible - except KeyboardInterrupt: - packet_filter.stop() - logger.info('Stopped solo session') - print_white('Stopped: "' + - Fore.LIGHTCYAN_EX + 'Solo session' + - Fore.LIGHTWHITE_EX + '"') - continue + options = { + 'type': 'list', + 'name': 'option', + 'message': 'Do you want to start this type of session?', + 'qmark': '@', + 'choices': [ + { + 'name': 'Yes, start', + 'value': 'start' + }, + { + 'name': 'No, go back', + 'value': 'back' + } + ] + } - elif option == 'whitelist': - local_ip = get_private_ip() - ip_set = {local_ip} - public_ip = get_public_ip() - if public_ip: - ip_set.add(public_ip) - else: - print_white('Failed to get Public IP. Running without.') + answer = prompt(options, style=style, ) + if answer: + os.system('cls') + option = answer['option'] - for ip, friend in custom_ips: - if friend.get('enabled'): + if option == 'start': + + logger.info('Starting solo session') + print_white('Running: "' + + Fore.LIGHTCYAN_EX + 'Solo session' + + Fore.LIGHTWHITE_EX + '" Press "' + Fore.LIGHTCYAN_EX + 'CTRL + C' + + Fore.LIGHTWHITE_EX + '" to stop.') + + packet_filter = Whitelist(ips=[]) try: - ip_calc = IPValidator.validate_get(ip) - ip_set.add(ip_calc) - except ValidationError: - logger.warning('Not valid IP or URL: {}'.format(ip)) - print_white('Not valid IP or URL: "' + - Fore.LIGHTCYAN_EX + '{}'.format(ip) + + packet_filter.start() + while True: + time.sleep(10) # this is still very terrible + except KeyboardInterrupt: + packet_filter.stop() + logger.info('Stopped solo session') + print_white('Stopped: "' + + Fore.LIGHTCYAN_EX + 'Solo session' + Fore.LIGHTWHITE_EX + '"') continue - for ip, friend in friends: - if friend.get('enabled'): - 
ip_set.add(ip) + elif option == 'whitelist': + print_white('WHITELISTED SESSION:\n') + print('Only IP addresses in your Custom list\n' + 'will be allowed to connect to you.\n\n' + 'If you are the host of a session,\n' + 'anyone not on your Custom list will\n' + 'likely lose connection to the session.\n\n' + 'If you are non-host (and any player\n' + 'in the session is not on your Custom\n' + 'list), you will lose connection to everyone else.\n') - logger.info('Starting whitelisted session with {} IPs'.format(len(ip_set))) - print_white('Running: "' + - Fore.LIGHTCYAN_EX + 'Whitelisted session' + - Fore.LIGHTWHITE_EX + '" Press "' + - Fore.LIGHTCYAN_EX + 'CTRL + C' + - Fore.LIGHTWHITE_EX + '" to stop.') + options = { + 'type': 'list', + 'name': 'option', + 'message': 'Do you want to start this type of session?', + 'qmark': '@', + 'choices': [ + { + 'name': 'Yes, start', + 'value': 'start' + }, + { + 'name': 'No, go back', + 'value': 'back' + } + ] + } - """ Set up packet_filter outside the try-catch so it can be safely referenced inside KeyboardInterrupt.""" - packet_filter = Whitelist(ips=ip_set) + answer = prompt(options, style=style, ) + if answer: + os.system('cls') + option = answer['option'] - print("Experimental support for Online 1.54+ developed by Speyedr.\n") - #"Not working? Found a bug?", "https://gitlab.com/Speyedr/guardian-fastload-fix/-/issues", - #"(Pressing ENTER will open the link in your web browser.)", sep="\n") + if option == 'start': - try: - packet_filter.start() - while True: - time.sleep(10) # this is still very terrible but might be good enough for now? - #input() - # if we reach here then the user pressed ENTER - #webbrowser.open("https://gitlab.com/Speyedr/guardian-fastload-fix/-/issues") - #time.sleep(1) # prevents the user from opening the page a ludicrous amount of times? 
- except KeyboardInterrupt: - packet_filter.stop() - logger.info('Stopped whitelisted session') - print_white('Stopped: "' + - Fore.LIGHTCYAN_EX + 'Whitelisted session' + - Fore.LIGHTWHITE_EX + '"') + local_ip = get_private_ip() + ip_set = {local_ip} + ip_tags = [sessioninfo.IPTag(local_ip, "LOCAL IP")] + public_ip = get_public_ip() + if public_ip: + ip_set.add(public_ip) + ip_tags.append(sessioninfo.IPTag(public_ip, "PUBLIC IP")) + else: + print_white('Failed to get Public IP. Running without.') + + for ip, friend in custom_ips: + if friend.get('enabled'): + try: + ip_calc = IPValidator.validate_get(ip) + ip_set.add(ip_calc) + ip_tags.append(sessioninfo.IPTag(ip_calc, friend.get('name') + " [WHITELIST]")) + except ValidationError: + logger.warning('Not valid IP or URL: {}'.format(ip)) + print_white('Not valid IP or URL: "' + + Fore.LIGHTCYAN_EX + '{}'.format(ip) + + Fore.LIGHTWHITE_EX + '"') + continue + + for ip, friend in friends: + if friend.get('enabled'): + ip_set.add(ip) + ip_tags.append(sessioninfo.IPTag(ip, friend.get('name') + " [CLOUD]")) + + logger.info('Starting whitelisted session with {} IPs'.format(len(ip_set))) + print_white('Running: "' + + Fore.LIGHTCYAN_EX + 'Whitelisted session' + + Fore.LIGHTWHITE_EX + '" Press "' + + Fore.LIGHTCYAN_EX + 'CTRL + C' + + Fore.LIGHTWHITE_EX + '" to stop.') + + # Exposes session information, diagnostics and behaviour. 
+ #manager = Manager() + #connection_stats = manager.list() + #session_info = sessioninfo.SessionInfo(manager.dict(), connection_stats, manager.Queue(), ip_tags) + + #logger.info("ip_tags: " + str(ip_tags)) + #logger.info("session_info: " + str(session_info)) + + """ Set up packet_filter outside the try-catch so it can be safely referenced inside KeyboardInterrupt.""" + packet_filter = Whitelist(ips=ip_set) + + print("Experimental support for Online 1.54+ developed by Speyedr.\n") - elif option == 'blacklist': - ip_set = set() - for ip, item in blacklist: - if item.get('enabled'): try: - ip = IPValidator.validate_get(item.get('ip')) - ip_set.add(ip) - except ValidationError: - logger.warning('Not valid IP or URL: {}'.format(ip)) - print_white('Not valid IP or URL: "' + - Fore.LIGHTCYAN_EX + '{}'.format(ip) + + #session_info.start() + packet_filter.start() + while True: + """ + Here is *probably* where the PacketLogger and SessionInfo classes should be managed. + Every [x] milliseconds the SessionInfo class will .update() with packet info (and a new print), + and the PacketLogger instance will be passed down to Whitelist() when initialized so the filter + loop can add packets to the capture. Once the session has stopped, the PacketLogger will add all + packets in its' memory queue to disk (or perhaps it should be sequentially writing to a file) and + save that file for investigation later. + """ + time.sleep(10) # this is still very terrible but might be good enough for now? + #input() + # if we reach here then the user pressed ENTER + #webbrowser.open("https://gitlab.com/Speyedr/guardian-fastload-fix/-/issues") + #time.sleep(1) # prevents the user from opening the page a ludicrous amount of times? 
+ + #time.sleep(0.01) + #print(session_info) # display session diagnostics + #print(sessioninfo.generate_stats(connection_stats)) + #session_info.process_item() + #os.system('cls') # refresh console + except KeyboardInterrupt: + packet_filter.stop() + #session_info.stop() + logger.info('Stopped whitelisted session') + print_white('Stopped: "' + + Fore.LIGHTCYAN_EX + 'Whitelisted session' + Fore.LIGHTWHITE_EX + '"') - continue - logger.info('Starting blacklisted session with {} IPs'.format(len(ip_set))) - print_white('Running: "' + - Fore.LIGHTBLACK_EX + 'Blacklist' + - Fore.LIGHTWHITE_EX + '" Press "' + - Fore.LIGHTBLACK_EX + 'CTRL + C' + - Fore.LIGHTWHITE_EX + '" to stop.') - packet_filter = Whitelist(ips=ip_set) - try: - packet_filter.start() - while True: - time.sleep(10) # this is still very terrible - except KeyboardInterrupt: - packet_filter.stop() - logger.info('Stopped blacklisted session') - print_white('Stopped: "' + - Fore.LIGHTBLACK_EX + 'Blacklist' + - Fore.LIGHTWHITE_EX + '"') + elif option == 'blacklist': + print_white('BLACKLISTED SESSION:\n') + print('IP addresses in your Blacklist list\n' + 'will not be allowed to connect to you.\n\n' + 'If a connection is routed through R* servers,\n' + 'that connection will also be blocked\n' + 'as a security measure.\n\n' + 'This mode is NOT RECOMMENDED as GTA Online\n' + 'has custom routing if only a handful of\n' + 'IP addresses are blocked.\n') - elif option == 'auto_whitelist': - logger.info('Starting auto whitelisted session') - collector = IPCollector(packet_count_min_threshold=15) - logger.info('Starting to collect IPs') - collector.start() - for _ in tqdm(range(10), ascii=True, desc='Collecting session'): - time.sleep(0.5) - collector.stop() - ip_set = set(collector.ips) - logger.info('Collected {} IPs'.format(len(ip_set))) - #print("IPs: " + str(ip_set)) - print("Checking for potential tunnels in collected IPs...\n") - potential_tunnels = set() - for ip in ip_set: - if 
util.DynamicBlacklist.ip_in_cidr_block_set(ip, dynamic_blacklist, min_cidr_suffix=0): - if ip not in custom_ips: # Ignore if user has this IP in custom whitelist. - potential_tunnels.add(ip) - #print("potential tunnels: ", potential_tunnels) - if len(potential_tunnels) > 0: - c = [{ - 'name': ip, - 'checked': False - } for ip in potential_tunnels] - options = { - 'type': 'checkbox', - 'name': 'option', - 'qmark': '@', - 'message': "", 'WARNING! Guardian has detected ' + str(len(potential_tunnels)) + ' IP' + - ("" if len(potential_tunnels) == 1 else "s") + " in your current session that may be used for " + - "connection tunnelling, and may break session security if added to the whitelist.\nUnless " + - "you know what you're doing, " + - "it is HIGHLY RECOMMENDED that you DO NOT allow these IPs to be added to the whitelist.\n" + - "Please note that excluding an IP from this list will likely result in players connected " + - "through that IP to be dropped from the session.\nIf this happens, then you may have to " + - "check both you and your friend's Windows Firewall settings to see why they can't directly " + - "connect to you.\nIf this is a false-positive and you are sure an IP is a direct connection, " + - "you can prevent this message from appearing by manually adding them to the Custom whitelist.\n\n" + - "Select the potentially session security breaking IPs you wish to keep whitelisted, if any.\n" - 'choices': c - } - answer = prompt(options, style=style) - print(answer) - if answer is not None: - try: - for ip in answer['option']: - potential_tunnels.remove(ip) # Anything that has been checked will not be considered a tunnel. - except KeyError: - pass # Probably the user pressing CTRL+C to cancel the selection, meaning no 'option' key. 
- #print("potential tunnels:", potential_tunnels) + options = { + 'type': 'list', + 'name': 'option', + 'message': 'Do you want to start this type of session?', + 'qmark': '@', + 'choices': [ + { + 'name': 'Yes, start', + 'value': 'start' + }, + { + 'name': 'No, go back', + 'value': 'back' + } + ] + } - for ip in potential_tunnels: - ip_set.remove(ip) + answer = prompt(options, style=style, ) + if answer: + os.system('cls') + option = answer['option'] - #print("ip_set:", ip_set) + if option == 'start': - else: - print("No tunnels found!") - local_ip = get_private_ip() - ip_set.add(local_ip) - public_ip = get_public_ip() - if public_ip: - ip_set.add(public_ip) - else: - print_white('Failed to get Public IP. Running without.') + local_ip = get_private_ip() + allowed_ips = {local_ip} + public_ip = get_public_ip() + if public_ip: + allowed_ips.add(public_ip) + else: + print_white('Failed to get Public IP. Running without.') - for ip, friend in custom_ips: - if friend.get('enabled'): + for ip, friend in custom_ips: + if friend.get('enabled'): + try: + ip_calc = IPValidator.validate_get(ip) + allowed_ips.add(ip_calc) + except ValidationError: + logger.warning('Not valid IP or URL: {}'.format(ip)) + print_white('Not valid IP or URL: "' + + Fore.LIGHTCYAN_EX + '{}'.format(ip) + + Fore.LIGHTWHITE_EX + '"') + continue + + for ip, friend in friends: + if friend.get('enabled'): + allowed_ips.add(ip) + + ip_set = set() + for ip, item in blacklist: + if item.get('enabled'): + try: + ip = IPValidator.validate_get(ip) + ip_set.add(ip) + except ValidationError: + logger.warning('Not valid IP or URL: {}'.format(ip)) + print_white('Not valid IP or URL: "' + + Fore.LIGHTCYAN_EX + '{}'.format(ip) + + Fore.LIGHTWHITE_EX + '"') + continue + logger.info('Starting blacklisted session with {} IPs'.format(len(ip_set))) + print_white('Running: "' + + Fore.LIGHTBLACK_EX + 'Blacklist' + + Fore.LIGHTWHITE_EX + '" Press "' + + Fore.LIGHTBLACK_EX + 'CTRL + C' + + Fore.LIGHTWHITE_EX + '" to stop.') 
+ + packet_filter = Blacklist(ips=ip_set, blocks=dynamic_blacklist, known_allowed=allowed_ips) try: - ip_calc = IPValidator.validate_get(ip) - ip_set.add(ip_calc) - except ValidationError: - logger.warning('Not valid IP or URL: {}'.format(ip)) - print_white('Not valid IP or URL: "' + - Fore.LIGHTCYAN_EX + '{}'.format(ip) + + packet_filter.start() + while True: + time.sleep(10) # this is still very terrible + except KeyboardInterrupt: + packet_filter.stop() + logger.info('Stopped blacklisted session') + print_white('Stopped: "' + + Fore.LIGHTBLACK_EX + 'Blacklist' + Fore.LIGHTWHITE_EX + '"') - continue - for ip, friend in friends: - if friend.get('enabled'): - ip_set.add(ip) + elif option == 'auto_whitelist': + print_white('AUTO WHITELISTED SESSION:\n') + print('Same as a Whitelisted session, except\n' + 'everybody currently in the session is\n' + 'temporarily added to the whitelist,\n' + 'which prevents them from being kicked.\n\n' + 'Any automatically collected IPs will be\n' + 'lost once the session ends.\n\n' + 'If Guardian detects that a player in your\n' + 'session is being routed through R* servers,\n' + 'you will be asked whether you wish to add\n' + 'this IP to the temporary whitelist.\n\n' + 'If you do decide to allow those IPs,\n' + 'your session may not be properly protected.\n') - time.sleep(5) # to see debug prints + options = { + 'type': 'list', + 'name': 'option', + 'message': 'Do you want to start this type of session?', + 'qmark': '@', + 'choices': [ + { + 'name': 'Yes, start', + 'value': 'start' + }, + { + 'name': 'No, go back', + 'value': 'back' + } + ] + } - os.system('cls') - logger.info('Starting whitelisted session with {} IPs'.format(len(ip_set))) - print_white('Running: "' + - Fore.LIGHTCYAN_EX + 'Whitelisted session' + - Fore.LIGHTWHITE_EX + '" Press "' + - Fore.LIGHTCYAN_EX + 'CTRL + C' + - Fore.LIGHTWHITE_EX + '" to stop.') + answer = prompt(options, style=style, ) + if answer: + os.system('cls') + option = answer['option'] + + if option 
== 'start': + + logger.info('Starting auto whitelisted session') + collector = IPCollector(packet_count_min_threshold=15) + logger.info('Starting to collect IPs') + collector.start() + for _ in tqdm(range(10), ascii=True, desc='Collecting session'): + time.sleep(0.5) + collector.stop() + ip_set = set(collector.ips) + logger.info('Collected {} IPs'.format(len(ip_set))) + #print("IPs: " + str(ip_set)) + print("Checking for potential tunnels in collected IPs...\n") + potential_tunnels = set() + for ip in ip_set: + if util.DynamicBlacklist.ip_in_cidr_block_set(ip, dynamic_blacklist, min_cidr_suffix=0): + if ip not in custom_ips: # Ignore if user has this IP in custom whitelist. + potential_tunnels.add(ip) + #print("potential tunnels: ", potential_tunnels) + if len(potential_tunnels) > 0: + c = [{ + 'name': ip, + 'checked': False + } for ip in potential_tunnels] + options = { + 'type': 'checkbox', + 'name': 'option', + 'qmark': '@', + 'message': 'WARNING! Guardian has detected ' + str(len(potential_tunnels)) + ' IP' + + ("" if len(potential_tunnels) == 1 else "s") + " in your current session that may be used for " + + "connection tunnelling, and may break session security if added to the whitelist.\nUnless " + + "you know what you're doing, " + + "it is HIGHLY RECOMMENDED that you DO NOT allow these IPs to be added to the whitelist.\n" + + "Please note that excluding an IP from this list will likely result in players connected " + + "through that IP to be dropped from the session.\nIf this happens, then you may have to " + + "check both you and your friend's Windows Firewall settings to see why they can't directly " + + "connect to you.\nIf this is a false-positive and you are sure an IP is a direct connection, " + + "you can prevent this message from appearing by manually adding them to the Custom whitelist.\n\n" + + "Select the potentially session security breaking IPs you wish to keep whitelisted, if any.\n", + 'choices': c + } + answer = prompt(options, 
style=style) + print(answer) + if answer is not None: + try: + for ip in answer['option']: + potential_tunnels.remove(ip) # Anything that has been checked will not be considered a tunnel. + except KeyError: + pass # Probably the user pressing CTRL+C to cancel the selection, meaning no 'option' key. + #print("potential tunnels:", potential_tunnels) - packet_filter = Whitelist(ips=ip_set) - try: - packet_filter.start() - while True: - time.sleep(10) # this is still very terrible - except KeyboardInterrupt: - packet_filter.stop() - logger.info('Stopping whitelisted session') - print_white('Stopped: "' + - Fore.LIGHTCYAN_EX + 'Whitelisted session' + - Fore.LIGHTWHITE_EX + '"') + for ip in potential_tunnels: + ip_set.remove(ip) - elif option == "lock_session": - os.system('cls') - logger.info('Session will now lock. All requests to join this session should fail.') - print_white('Running: "' + - Fore.LIGHTCYAN_EX + 'Locked session' + - Fore.LIGHTWHITE_EX + '" Press "' + - Fore.LIGHTCYAN_EX + 'CTRL + C' + - Fore.LIGHTWHITE_EX + '" to unlock session.') + #print("ip_set:", ip_set) - packet_filter = Locked() - try: - packet_filter.start() - while True: - time.sleep(10) # this is still very terrible - except KeyboardInterrupt: - packet_filter.stop() - logger.info('Stopping whitelisted session') - print_white('Stopped: "' + - Fore.LIGHTCYAN_EX + 'Locked session' + - Fore.LIGHTWHITE_EX + '"') - - elif option == "lock_whitelist": - local_ip = get_private_ip() - ip_set = {local_ip} - public_ip = get_public_ip() - if public_ip: - ip_set.add(public_ip) - else: - print_white('Failed to get Public IP. Running without.') + else: + print("No tunnels found!") + local_ip = get_private_ip() + ip_set.add(local_ip) + public_ip = get_public_ip() + if public_ip: + ip_set.add(public_ip) + else: + print_white('Failed to get Public IP. 
Running without.') - for ip, friend in custom_ips: - if friend.get('enabled'): + for ip, friend in custom_ips: + if friend.get('enabled'): + try: + ip_calc = IPValidator.validate_get(ip) + ip_set.add(ip_calc) + except ValidationError: + logger.warning('Not valid IP or URL: {}'.format(ip)) + print_white('Not valid IP or URL: "' + + Fore.LIGHTCYAN_EX + '{}'.format(ip) + + Fore.LIGHTWHITE_EX + '"') + continue + + for ip, friend in friends: + if friend.get('enabled'): + ip_set.add(ip) + + time.sleep(5) # to see debug prints + + os.system('cls') + logger.info('Starting whitelisted session with {} IPs'.format(len(ip_set))) + print_white('Running: "' + + Fore.LIGHTCYAN_EX + 'Whitelisted session' + + Fore.LIGHTWHITE_EX + '" Press "' + + Fore.LIGHTCYAN_EX + 'CTRL + C' + + Fore.LIGHTWHITE_EX + '" to stop.') + + packet_filter = Whitelist(ips=ip_set) try: - ip_calc = IPValidator.validate_get(ip) - ip_set.add(ip_calc) - except ValidationError: - logger.warning('Not valid IP or URL: {}'.format(ip)) - print_white('Not valid IP or URL: "' + - Fore.LIGHTCYAN_EX + '{}'.format(ip) + + packet_filter.start() + while True: + time.sleep(10) # this is still very terrible + except KeyboardInterrupt: + packet_filter.stop() + logger.info('Stopping whitelisted session') + print_white('Stopped: "' + + Fore.LIGHTCYAN_EX + 'Whitelisted session' + Fore.LIGHTWHITE_EX + '"') - continue - for ip, friend in friends: - if friend.get('enabled'): - ip_set.add(ip) + elif option == "lock_session": + print_white('LOCKED SESSION:\n') + print('This mode blocks all join requests,\n' + 'preventing new players from entering\n' + 'the session.\n\n' + 'Anyone already in the session remains,\n' + 'and this mode prevents people from entering\n' + 'the session through R* servers if someone\n' + 'is being tunnelled through a R* IP.\n\n' + 'However, if someone leaves the session\n' + 'they will not be able to get back in\n' + 'unless you end the Locked session.\n') - os.system('cls') - logger.info('Starting locked 
session with {} IP overrides'.format(len(ip_set))) - print_white('Running: "' + - Fore.LIGHTCYAN_EX + 'Locked session w/ Whitelist override' + - Fore.LIGHTWHITE_EX + '" Press "' + - Fore.LIGHTCYAN_EX + 'CTRL + C' + - Fore.LIGHTWHITE_EX + '" to stop.') - # TODO: There's a formatting fail here and in at least one other session type. - # I have a feeling I'll eventually refactor Guardian enough to hit v4. - - packet_filter = LockedWhitelist(ips=ip_set) - try: - packet_filter.start() - while True: - time.sleep(10) # this is still very terrible - except KeyboardInterrupt: - packet_filter.stop() - logger.info('Stopping locked session w/ whitelist override') - print_white('Stopped: "' + - Fore.LIGHTCYAN_EX + 'Locked session w/ Whitelist override' + - Fore.LIGHTWHITE_EX + '"') + options = { + 'type': 'list', + 'name': 'option', + 'message': 'Do you want to start this type of session?', + 'qmark': '@', + 'choices': [ + { + 'name': 'Yes, start', + 'value': 'start' + }, + { + 'name': 'No, go back', + 'value': 'back' + } + ] + } + + answer = prompt(options, style=style, ) + if answer: + os.system('cls') + option = answer['option'] + + if option == 'start': + + os.system('cls') + logger.info('Session will now lock. 
All requests to join this session should fail.') + print_white('Running: "' + + Fore.LIGHTCYAN_EX + 'Locked session' + + Fore.LIGHTWHITE_EX + '" Press "' + + Fore.LIGHTCYAN_EX + 'CTRL + C' + + Fore.LIGHTWHITE_EX + '" to unlock session.') + + packet_filter = Locked() + try: + packet_filter.start() + while True: + time.sleep(10) # this is still very terrible + except KeyboardInterrupt: + packet_filter.stop() + logger.info('Stopping locked session') + print_white('Stopped: "' + + Fore.LIGHTCYAN_EX + 'Locked session' + + Fore.LIGHTWHITE_EX + '"') elif option == 'lists': while True: @@ -1042,8 +1244,8 @@ def main(): # My perms os.system('cls') while True: - allowed = cloud.get_allowed() - if len(allowed) <= 0: + allowed_ips = cloud.get_allowed() + if len(allowed_ips) <= 0: print_white('None') break options = { @@ -1051,7 +1253,7 @@ def main(): 'name': 'option', 'qmark': '@', 'message': 'Who to revoke', - 'choices': [f.get('name') for f in allowed] + 'choices': [f.get('name') for f in allowed_ips] } answer = prompt(options, style=style) if not answer: @@ -1244,6 +1446,11 @@ def main(): Fore.LIGHTCYAN_EX + answer['token'] + Fore.LIGHTWHITE_EX + '"') + elif option == 'discord': + os.system('cls') + print_white('Opening Discord URL in your default browser...') + webbrowser.open("https://discord.gg/6FzKCh4j4v") + elif option == 'support_zip': os.system('cls') print_white('NOTICE: This program will now log all udp traffic on port 6672 for 1 minute. 
' @@ -1336,67 +1543,108 @@ def main(): if __name__ == '__main__': freeze_support() - config = data.ConfigData(data.file_name) - try: - blacklist = data.CustomList('blacklist') - custom_ips = data.CustomList('custom_ips') - friends = data.CustomList('friends') - except data.MigrationRequired: - data.migrate_to_dict() - time.sleep(5) - sys.exit() - - os.system('cls') - logger.info('Init') - if not ctypes.windll.shell32.IsUserAnAdmin(): - print_white('Please start as administrator') - logger.info('Started without admin') - input('Press enter to exit.') - sys.exit() - logger.info('Booting up') - print_white('Booting up...') - if not pydivert.WinDivert.is_registered(): - pydivert.WinDivert.register() - ctypes.windll.kernel32.SetConsoleTitleW('Guardian {}'.format(version)) - cloud = networkmanager.Cloud() - ipsyncer = IPSyncer(None) - print_white('Building dynamic blacklist...') - dynamic_blacklist = set() + try: - dynamic_blacklist = util.DynamicBlacklist.get_dynamic_blacklist() - except (util.DynamicBlacklist.ScrapeError, RequestException, json.decoder.JSONDecodeError, IndexError, ValueError, TypeError, KeyError) as e: - print_white('ERROR: Could not construct dynamic blacklist: ' + str(e) + - '\nAuto-Whitelist will not work correctly.') - time.sleep(3) - print_white('Checking connections.') - if cloud.check_connection(): - version = cloud.version() - version = version.get('version', None) if version else None - if version: - if StrictVersion(version) > StrictVersion(version): - os.system('cls') - print_white('An update was found.') - options = { - 'type': 'confirm', - 'message': 'Open browser?', - 'name': 'option', - 'qmark': '@', - 'default': True - } - answer = prompt(options, style=style) - if answer['option']: - webbrowser.open('https://www.thedigitalarc.com/software/Guardian') - token = config.get('token') - if token: - cloud.token = token - if cloud.check_token(): - ipsyncer.token = token - ipsyncer.start() - print_white('Starting IP syncer.') + success = False 
+ while not success: + try: + config = data.ConfigData(data.file_name) + success = True # if we reach here then config was parsed successfully + except Exception as e: + # config file could not be loaded. either file creation failed or data.json is corrupt. + if not os.path.isfile(data.file_name): + # could not create config. fatal error. MB_OK is 0x0, MB_ICON_ERROR is 0x10 + ctypes.windll.user32.MessageBoxW(None, f"FATAL: Guardian could not create the config file {data.file_name}.\n\n" + f"Press 'Ok' to close the program.", + f"Fatal Error", 0x0 | 0x10) + raise e # could call sys.exit instead but I think raising again is more sensible + else: + # MB_ABORTRETRYIGNORE is 0x2, MB_ICON_ERROR is 0x10 + choice = ctypes.windll.user32.MessageBoxW(None, f"Guardian could not load the config file {data.file_name}.\n\n" + f"The most common reason for this error is that the file is corrupt.\n\n" + f"Press 'Abort' to close Guardian, press 'Retry' to load the config again, " + f"or press 'Ignore' to \"Refresh\" Guardian by renaming the corrupt " + f"config file and creating a new one.", + f"Error", 0x2 | 0x10) + # ID_ABORT = 0x3, ID_RETRY = 0x4, ID_IGNORE = 0x5 + if choice == 0x3: + sys.exit(-2) + elif choice == 0x4: + pass # we'll hit the bottom of the loop and try again + else: + separator = data.file_name.rindex('.') + new_name = data.file_name[:separator] + '_' + str(hex(int(time.time_ns())))[2:] + data.file_name[separator:] + os.rename(data.file_name, new_name) + # file has been renamed, try again + + # at this point the file has been parsed and is valid--any additional exceptions are explicit or programmer error + try: + blacklist = data.CustomList('blacklist') + custom_ips = data.CustomList('custom_ips') + friends = data.CustomList('friends') + except data.MigrationRequired: + data.migrate_to_dict() + time.sleep(5) + sys.exit() + + os.system('cls') + logger.info('Init') + if not ctypes.windll.shell32.IsUserAnAdmin(): + print_white('Please start as administrator') + 
logger.info('Started without admin') + input('Press enter to exit.') + sys.exit() + logger.info('Booting up') + print_white('Booting up...') + if not pydivert.WinDivert.is_registered(): + pydivert.WinDivert.register() + ctypes.windll.kernel32.SetConsoleTitleW('Guardian {}'.format(version)) + cloud = networkmanager.Cloud() + ipsyncer = IPSyncer(None) + print_white('Building dynamic blacklist...') + dynamic_blacklist = set() + try: + dynamic_blacklist = util.DynamicBlacklist.get_dynamic_blacklist("db.json") + except (util.DynamicBlacklist.ScrapeError, RequestException, json.decoder.JSONDecodeError, IndexError, ValueError, TypeError, KeyError, FileNotFoundError) as e: + print_white('ERROR: Could not construct dynamic blacklist: ' + str(e) + + '\nAuto-Whitelist and Blacklist will not work correctly.') + time.sleep(3) + print_white('Checking connections.') + if cloud.check_connection(): + remote_version = cloud.version() + remote_version = remote_version.get('version', None) if remote_version else None + if remote_version: + if StrictVersion(remote_version) > StrictVersion(version): + os.system('cls') + print_white('An update was found.') + options = { + 'type': 'confirm', + 'message': 'Open browser?', + 'name': 'option', + 'qmark': '@', + 'default': True + } + answer = prompt(options, style=style) + if answer['option']: + webbrowser.open('https://www.thedigitalarc.com/software/Guardian') + token = config.get('token') + if token: + cloud.token = token + if cloud.check_token(): + ipsyncer.token = token + ipsyncer.start() + print_white('Starting IP syncer.') + except Exception as e: + crash_report(e, "Guardian crashed before reaching main()") + raise + while True: try: main() except KeyboardInterrupt: continue + except Exception as e: + crash_report(e, "Guardian crashed in main()") + raise # still crash the program because it's not recoverable finally: ipsyncer.stop() diff --git a/network/blocker.py b/network/blocker.py index 4efbb5e..1624cd7 100644 --- a/network/blocker.py +++ b/network/blocker.py @@ -1,11 +1,14 
@@ import multiprocessing +from abc import abstractmethod + import pydivert import re import logging import data -from network import networkmanager +from network import networkmanager, sessioninfo from app import IPValidator from questionary import ValidationError +from util.DynamicBlacklist import ip_in_cidr_block_set debug_logger = logging.getLogger('debugger') debug_logger.setLevel(logging.DEBUG) @@ -92,18 +95,140 @@ """ -class Whitelist(object): +class AbstractPacketFilter: + + def __init__(self, ips, session_info=None, debug=False): + self.ips = ips + self.process = multiprocessing.Process(target=self.run, args=()) + self.process.daemon = True + self.session_info = session_info # If no session info object was passed then it won't be used. + self.debug_print_decisions = debug + + def start(self): + self.process.start() + logger.info('Dispatched ' + self.__class__.__name__ + ' blocker process') + + def stop(self): + self.process.terminate() + logger.info('Terminated ' + self.__class__.__name__ + ' blocker process') + + @abstractmethod + def is_packet_allowed(self, packet): + pass + + def run(self): + print("ips: " + str(self.ips)) + + if not pydivert.WinDivert.is_registered(): + pydivert.WinDivert.register() + try: + with pydivert.WinDivert(packetfilter) as w: + for packet in w: + decision = self.is_packet_allowed(packet) + if decision: + w.send(packet) + + if self.session_info is not None: + self.session_info.add_packet(sessioninfo.safe_pickle_packet(packet), allowed=decision) + + if self.debug_print_decisions: + print(self.construct_debug_packet_info(packet, decision)) + + except KeyboardInterrupt: + """ This never hits, but the override is still necessary to stop the program from quitting on CTRL + C. 
""" + pass + + @staticmethod + def construct_debug_packet_info(packet, decision=None): + prefix = "" if decision is None else ("ALLOWING" if decision else "DROPPING") + + return prefix + \ + " PACKET FROM " + packet.src_addr + ":" + str(packet.src_port) + " Len:" + str(len(packet.payload)) + + +class Whitelist(AbstractPacketFilter): """ Packet filter that will allow packets from with source ip present on ips list """ - def __init__(self, ips): + def __init__(self, ips, session_info=None, debug=False): + super().__init__(ips, session_info, debug) + + def is_packet_allowed(self, packet): + ip = packet.ip.src_addr + size = len(packet.payload) + + """ The "special sauce" for the new filtering logic. We're using payload sizes to guess if the packet + has a behaviour we want to allow through. """ + if (ip in self.ips) or (size in heartbeat_sizes) or (size in matchmaking_sizes): + return True + + +class Blacklist(AbstractPacketFilter): + + def __init__(self, ips, blocks=None, known_allowed=None, session_info=None, debug=False): + super().__init__(ips, session_info, debug) + + if blocks is None: + blocks = set() + if known_allowed is None: + known_allowed = set() + + self.ip_blocks = blocks # set of CIDR blocks + self.known_allowed = known_allowed # IPs which are known to not be in blocks + + def is_packet_allowed(self, packet): + ip = packet.ip.src_addr + size = len(packet.payload) + + """ Somewhat ironically we still use whitelisting logic. 
""" + if (ip in self.known_allowed) or (size in matchmaking_sizes) or (size in heartbeat_sizes): + return True + + elif ip not in self.ips: + # If it's not directly blacklisted it might be in a blacklisted range + if ip_in_cidr_block_set(ip, self.ip_blocks): + self.ips.add(ip) # It was in a blacklisted range, add this to the standard list + return False + + else: + self.known_allowed.add(ip) # If not then it's definitely allowed, remember this for next time + return True + + else: + return False + + +class Locked(AbstractPacketFilter): + + def __init__(self, session_info=None, debug=False): + super().__init__(set(), session_info, debug) + + def is_packet_allowed(self, packet): + size = len(packet.payload) + + """ No new matchmaking requests allowed. + Seems a bit overkill (and perhaps reckless) to always block these payload sizes but my packet + captures show that these payload sizes don't occur in any regular game traffic so... """ + if size in matchmaking_sizes: + return False + + return True + + +class WhitelistOld(object): + """ + Packet filter that will allow packets from with source ip present on ips list + """ + + def __init__(self, ips, session_info=None): """ :param list ips: """ self.ips = ips self.process = multiprocessing.Process(target=self.run, args=()) self.process.daemon = True + self.session_info = session_info # If no session info object was passed then it won't be used. def start(self): self.process.start() @@ -115,6 +240,8 @@ def stop(self): def run(self): + #self.session_info.start() + print("ips: " + str(self.ips)) if not pydivert.WinDivert.is_registered(): pydivert.WinDivert.register() @@ -123,6 +250,7 @@ def run(self): for packet in w: ip = packet.ip.src_addr size = len(packet.payload) # the size of the payload. used to guess packet's behaviour / "intent" + #print(packet) """ The "special sauce" for the new filtering logic. 
We're using payload sizes to guess if the packet @@ -130,27 +258,39 @@ def run(self): """ if (ip in self.ips) or (size in heartbeat_sizes) or (size in matchmaking_sizes): w.send(packet) + if self.session_info is not None: + self.session_info.add_packet(sessioninfo.safe_pickle_packet(packet), allowed=True) #print("ALLOWING PACKET FROM " + packet.src_addr + ":" + str(packet.src_port) + " Len:" + str(len(packet.payload))) else: #print("DROPPING PACKET FROM " + packet.src_addr + ":" + str(packet.src_port) + " Len:" + str(len(packet.payload))) pass # drop the packet because it didn't match our filter. + if self.session_info is not None: + self.session_info.add_packet(sessioninfo.safe_pickle_packet(packet), allowed=False) except KeyboardInterrupt: """ This never hits, but the override is still necessary to stop the program from quitting on CTRL + C. """ + #self.session_info.stop() pass -class Blacklist(object): +class BlacklistOld(object): """ Packet filter that will block packets from with source ip present on ips list """ - def __init__(self, ips): + def __init__(self, ips, blocks=None, known_allowed=None): """ - :param list ips: + :param set ips: """ + if blocks is None: + blocks = set() + if known_allowed is None: + known_allowed = set() + self.ips = ips + self.ip_blocks = blocks # set of CIDR blocks + self.known_allowed = known_allowed # IPs which are known to not be in blocks self.process = multiprocessing.Process(target=self.run, args=()) self.process.daemon = True @@ -178,10 +318,30 @@ def run(self): NOTE: This probably isn't a complete list of R* tunnels. Ideally, ipfilter should contain all possible ranges of inbound (and maybe even outbound?) tunnels. """ - if (ip in self.ips or ipfilter.match(ip)) and not ((size in matchmaking_sizes) or size in heartbeat_sizes): - pass # drop the packet because it's not allowed. 
- else: + if (ip in self.known_allowed) or (size in matchmaking_sizes) or (size in heartbeat_sizes): w.send(packet) + print("ALLOWING PACKET FROM " + packet.src_addr + ":" + str(packet.src_port) + " Len:" + str(len(packet.payload))) + + elif ip not in self.ips: + # If it's not directly blacklisted it might be in a blacklisted range + if ip_in_cidr_block_set(ip, self.ip_blocks): + self.ips.add(ip) # It was in a blacklisted range, add this to the standard list + print( + "DROPPING PACKET FROM " + packet.src_addr + ":" + str(packet.src_port) + " Len:" + str( + len(packet.payload))) + else: + self.known_allowed.add(ip) # If not then it's definitely allowed, remember this for next time + w.send(packet) + print( + "ALLOWING PACKET FROM " + packet.src_addr + ":" + str(packet.src_port) + " Len:" + str( + len(packet.payload))) + + else: + pass # was in the blacklist + print( + "DROPPING PACKET FROM " + packet.src_addr + ":" + str(packet.src_port) + " Len:" + str( + len(packet.payload))) + except KeyboardInterrupt: pass @@ -189,7 +349,7 @@ def run(self): # unnecessarily duplicate code here. -class Locked(object): +class LockedOld(object): """ Packet filter to block any new requests to join the session. 
""" diff --git a/network/networkmanager.py b/network/networkmanager.py index 81364b6..bdf6ec5 100644 --- a/network/networkmanager.py +++ b/network/networkmanager.py @@ -16,15 +16,18 @@ def _send_request(self, method, endpoint, params=None, payload=None, **kwargs): 'Content-Type': 'application/json; charset=UTF-8', 'Authorization': self.token if self.token else None} for _ in range(3): - resp = s.request(method=method, url=url, params=params, json=payload, headers=headers, **kwargs) - if resp.status_code != 502: # Retry on error 502 "Bad Gateway" - break + try: + resp = s.request(method=method, url=url, params=params, json=payload, headers=headers, **kwargs) + if resp.status_code != 502: # Retry on error 502 "Bad Gateway" + break + except requests.exceptions.RequestException as e: + raise ConnectionError if resp.status_code >= 400: raise ConnectionError try: - resp_text = resp.json(encoding="utf-8") + resp_text = resp.json() except ValueError: resp_text = resp.text diff --git a/network/packetlogger.py b/network/packetlogger.py new file mode 100755 index 0000000..e69de29 diff --git a/network/sessioninfo.py b/network/sessioninfo.py new file mode 100755 index 0000000..577edfd --- /dev/null +++ b/network/sessioninfo.py @@ -0,0 +1,446 @@ +from multiprocessing import Manager, Process +import os +import time + + +""" +Ok so now that we've finally figured out most of the bugs / problems with pickling packets we can now actually start +to curate information from packets (and perhaps even other metrics) that can be displayed. I have a couple ideas: + +Filter Processing Time: The amount of time it took to process the previous packet. +Average Filter Processing Time: A cycling queue of 50 or 100 numbers, all containing the amount of time in seconds it + took to filter each packet. Would give a good idea on the additional latency introduced by the filter. + +Last IPC Overhead: The amount of time it took to pickle / pipe information across to the diagnostic process. 
+ +Average IPC Overhead: Cycling queue of 50 or 100 numbers, same ideology as Average Filter Processing Time. + +Current Filter Load: The amount of "off-time" between filtering two different packets. I believe the calculation would + be (filter_processing_time) / (filter_processing_time + filter_off_time). If filter_off_time is big, the filter is + not under load and the resulting value is small. If filter_off_time is small, the filter is loaded and the resulting + value will be much closer to 1 (or 100%). +Average Filter Load: Same as Current Filter Load, but the last 50 or 100 calculations. + +Per IP: + Packets Received: Pretty obvious. + Bytes Received: Pretty obvious. + Packets Received per second: Pretty obvious. Could this metric be used as a last line of defence on client tunnels? + Bytes Received per second: Pretty obvious. + Last Seen / Last Packet Recv'd: Pretty obvious. + Packets Dropped: Pretty obvious. + Packets Allowed: Pretty obvious. + +Tags: Miscellaneous information about an IP. All in text format. + VPN / Residential / TOR: What "kind" of IP this is. + R* OFFICIAL / R* SERVICES: If this is used by R* for their services. + APPROXIMATE LOCATION: Nothing too descriptive, just the continent / nation. We don't want dox'ing. + MODDER / STALKER / SCRAPER: There's a chance that we can tag stalkers based on network behaviour. + WHITELISTED [TAG]: This IP is whitelisted. + BLACKLISTED [TAG]: This IP is blacklisted. + FRIEND [TAG]: This IP belongs to a cloud-based friend. + UNKNOWN: This IP has been seen but its behaviour is unknown. + CONNECTED / IN SESSION: This IP is currently in the session. + +Overall: + Packets Received: + Packets Dropped: + Packets Allowed: + Bytes Received: + Bytes Dropped: + Bytes Allowed: + +Meta: Relating to the processing of Diagnostics. + Diagnostics Queue Size: How many packets are pending processing. + Average Processing Time: Average of how long it took to process the last 50 / 100 packets. 
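The cycling-queue averages and the load formula sketched in this docstring (load = filter_processing_time / (filter_processing_time + filter_off_time)) could be implemented with fixed-length deques; the `FilterMetrics` class and its method names below are illustrative only, not part of Guardian:

```python
import time
from collections import deque


class FilterMetrics:
    """Keeps the last N processing times and off-times to compute the
    averaged filter load: busy_time / (busy_time + idle_time)."""

    def __init__(self, window=100):
        self.processing_times = deque(maxlen=window)  # seconds spent filtering each packet
        self.off_times = deque(maxlen=window)         # idle gap before each packet arrived
        self._last_finished = None

    def record(self, started, finished):
        """Call once per packet with timestamps (e.g. from time.perf_counter())."""
        self.processing_times.append(finished - started)
        if self._last_finished is not None:
            self.off_times.append(started - self._last_finished)
        self._last_finished = finished

    def average_fpt(self):
        """Average Filter Processing Time over the window."""
        return sum(self.processing_times) / len(self.processing_times)

    def average_load(self):
        busy = sum(self.processing_times)
        idle = sum(self.off_times)
        return busy / (busy + idle) if (busy + idle) > 0 else 0.0


metrics = FilterMetrics(window=50)
metrics.record(started=0.0, finished=0.001)    # 1 ms of work
metrics.record(started=0.003, finished=0.004)  # 2 ms idle, then 1 ms of work
print(round(metrics.average_load(), 2))  # busy 2 ms / (2 ms + 2 ms idle) -> 0.5
```

Because the deques use `maxlen`, old samples fall off automatically, so the averages always reflect only the most recent window.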
+ Print Overhead: How long it's taking to display content on the screen. + Average Print Overhead: Same logic as all the other averaging methods. + + ================================ + IP | Packets IN | Packets IN/s | Bytes IN | Bytes IN/s | Packets OUT | Packets OUT/s | Bytes OUT | Bytes OUT/s | Last Seen | # Pckts Allowed | # Pckts Dropped | Tags and Info + 192.168.0.235 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 | NEVER | 0 | 0 | LOCAL IP + 172.68.2.143 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 | NEVER | 0 | 0 | PUBLIC IP + 255.255.255.255 | 24 | 0.0 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 | 10s | 24 | 0 | R* SERVICES + 85.42.1.15 | | UNKNOWN (CRP, IT) + 101.172.93.149 | | UNKNOWN (RES, AU) + 1.145.210.255 | | UNKNOWN (MBL, AU) + 66.176.75.199 | | 1x JOIN REQ. (RES, US) + | | 1x REQ. 1x CNFM. + | | 2x JOIN CNFM. + | | CONNECTED [TessioMT] + | | WHITELISTED [RDS128] + | | BLACKLISTED [Example] + TOTAL | | RUNNING FOR: 12 MINUTES + + Avg. Filter Load: 23% Avg. FPT: 0.39 ms Avg. IPC: 2.31 ms Print Time: 23 ms + Cur. Filter Load: <1% Last FPT: 0.12 ms Last IPC: 22.9 ms Queue Size: 2 + ================================ + +Tag Priority: This is the order of precedence for the info tags. Lower is more important. + UNKNOWN + WHITELISTED + BLACKLISTED + xx JOIN REQ. + xx REQ. xx CNFM. + CONNECTED + R* SERVICES + LOCAL IP + PUBLIC IP + +Can we also add coloured rows to the list? Would be pretty pog to see rows turn green when a packet was accepted, red when a packet was rejected, etc. 
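The coloured-row idea at the end of this wishlist could lean on colorama, which Guardian already uses for its `Fore.*` prints; the `format_row` helper below is a hypothetical sketch, not existing Guardian code:

```python
from colorama import Fore, Style, init


def format_row(ip, allowed):
    """Build one table row: green when the packet was accepted, red when rejected."""
    colour = Fore.GREEN if allowed else Fore.RED
    verdict = "ALLOWED" if allowed else "DROPPED"
    return colour + "{:<16} | {}".format(ip, verdict) + Style.RESET_ALL


init()  # wraps stdout so ANSI colour codes also render on older Windows consoles
print(format_row("192.168.0.235", True))
print(format_row("66.176.75.199", False))
```

Resetting the style at the end of each row keeps a coloured row from bleeding into the totals line beneath it.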
+ + """ + +class MinimalPacket: + + def __init__(self, packet): + #self.ip = packet.ip + #self.ip.src_addr = packet.ip.src_addr + + #self.ip.raw.release() + #self.ip.raw = bytes(self.ip.raw) + #self.payload = bytes(packet.raw) + self.payload_length = len(packet.payload) + self.src_addr = packet.src_addr + self.src_port = packet.src_port + self.dst_addr = packet.dst_addr + self.dst_port = packet.dst_port + self.is_inbound = packet.is_inbound + self.is_outbound = packet.is_outbound + self.direction = packet.direction + +def safe_pickle_packet(packet): + """ + Returns a variant of a PyDivert packet that: + a) can be pickled (typical PyDivert packets use MemoryView which cannot be pickled) + b) has had certain untrusted, external information redacted (code execution can occur when unpickling, i.e. + certain externally-controllable characteristics like packet content should be removed) + """ + """ + Delete the raw payload content. We don't need it (it's encrypted anyways) and modded clients can send raw bytes + to other clients, including us, which could allow arbitrary code execution to occur when unpickling packet objects. + (See https://docs.python.org/3/library/pickle.html) + + TODO: Investigate performance of serialization with JSON instead of pickling to improve program security. + """ + #packet.raw.release() + #packet.raw = None + #packet.raw = packet.raw.tobytes() # convert to finite array + + # packet.ipv4.raw and packet.udp.raw reference the *exact* same buffer as packet.raw so they also get "released". + #packet.ipv4.raw.release() + #packet.udp.raw.release() + + min_packet = MinimalPacket(packet) + + #print(min_packet) + + return min_packet + + +def generate_stats(connection_stats): + """ +Given a list containing connection statistics, generates a human-readable representation of those statistics. 
+ This function was originally the override for __str__ (so you could just call print(session_info)) but it appears a lot + of my assumptions about programming design need to go out the window when writing multi-processing programs. + """ + #print("connection_stats: ", connection_stats) + #print("len(connection_stats): ", len(connection_stats)) + str_gen = [] # A partially generated string. Concatenating strings in python using '+' is sub-optimal; O(n^2) + #get = self.connection_stats + for con_stat in connection_stats: + info = con_stat.get_info() + # TODO: Would an implementation of list that returns itself (to allow recursive .append() calls) + # instead of None (which is why we have so many lines) be useful? + str_gen.append("IP: ") + str_gen.append(info['ip']) + str_gen.append("\t | Packets IN: ") + str_gen.append(str(info['packets_in'])) + str_gen.append("\t | Packets OUT: ") + str_gen.append(str(info['packets_out'])) + str_gen.append("\t | Last Seen: ") + str_gen.append(str(info['last_seen'])) + str_gen.append("\t | # Allowed: ") + str_gen.append(str(info['packets_allowed'])) + str_gen.append("\t | # Dropped: ") + str_gen.append(str(info['packets_dropped'])) + str_gen.append("\t | Tag: ") + str_gen.append(info['tag']) + str_gen.append("\n") + + # Once this loop is complete, the *actual* string object can be built. + return "".join(str_gen) + + +class SessionInfo: + + """ + Returns human-readable strings that expose session information to the user after being supplied captured packets. + + proxy_dict: A proxy to a dictionary (for known_ips) + proxy_list: A proxy to a list (for connection_stats) + proxy_queue: A proxy to a Queue (for packet_queue) + Proxies must be passed down from the parent (and will also be shared elsewhere so they can be modified). + + initial_ips: Array of IPTag that should be known before any traffic is received from those IPs. + + known_ips: Dictionary of known IPs, used to check if an IP has been seen previously. 
Value stored is the index into an array of ConnectionStats. + connection_stats: Array of ConnectionStats, which contain the calculations and statistics of connections. (duh) + """ + def __init__(self, proxy_dict, proxy_list, proxy_queue, initial_ips=None): + if initial_ips is None: + initial_ips = [] + + self.known_ips = proxy_dict + # self.connection_stats = [ConnectionStats(IPTag("1.1.1.1", "test"))] + self.connection_stats = proxy_list + + for ip_tag in initial_ips: + #print("ip_tag: " + str(ip_tag)) + self.add_con_stat_from_ip_tag(ip_tag) + # Connection stats and known IPs are now initialised. + + #print(self.known_ips) + #print(self.connection_stats) + + i = 0 + while i < len(self.connection_stats): + #print("ATTEMPTING TO ACCESS connection_stats[" + str(i) + "]") + con_stat = self.connection_stats[i] + #print("ACCESS SUCCESSFUL") + #print(con_stat) + i += 1 + + #print("ATTEMPTING FOR LOOP") + #for con_stat in self.connection_stats: + #print(con_stat) + + """ + This is a queue of packets pending processing. I wanted to make adding packets to SessionInfo objects as light + as possible because packets will come from the filtering thread and if not designed properly, would "block" the + entire filter for as long as it took for SessionInfo to process that packet. This could lead to in-game + latency at best, and possibly a program crash at worst (because the filter cannot process packets quickly + enough, leading to memory exhaustion). + + So, "adding" a packet actually only puts it in this queue, and a different process will do the depletion (and + of course, processing) of packets in this queue. + """ + self.packet_queue = proxy_queue + + #self.processing_thread = Process(target=self.run, args=()) + #self.processing_thread.daemon = True # Terminate this thread if the parent gets terminated. 
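The hand-off described above (the filter only enqueues without blocking; a separate consumer drains and processes) can be sketched with a multiprocessing queue; `enqueue_packet` and `drain_one` are illustrative names, not Guardian's actual API:

```python
from multiprocessing import Queue
from queue import Empty, Full


def enqueue_packet(queue, packet, allowed):
    """Filter side: hand the packet off without ever blocking the filter."""
    try:
        queue.put((packet, allowed), block=False)
        return True
    except Full:
        return False  # better to lose one statistic than to stall the packet filter


def drain_one(queue, timeout=1):
    """Stats side: pull one pending packet, or None if nothing arrives in time."""
    try:
        return queue.get(timeout=timeout)
    except Empty:
        return None


packet_queue = Queue()
enqueue_packet(packet_queue, "fake-packet", True)
item = drain_one(packet_queue)  # ("fake-packet", True)
```

Note that `multiprocessing.Queue.put` hands data to a background feeder thread, so an immediate non-blocking `get` can still raise `Empty`; polling with a timeout on the consumer side avoids that race.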
+ + #def start(self): + #self.processing_thread.start() + + #def stop(self): + #self.processing_thread.terminate() + + """ + A packet was received by the filter and is now being shared with SessionInfo. + + packet: The packet (as received by PyDivert) + allowed: Whether the packet was allowed (true) or dropped (false). + """ + def add_packet(self, packet, allowed): + """ + We cannot waste any time waiting for a spot in the queue. This function is called in the context of the + filtering thread and so processing will happen later (and almost certainly on a different thread). + """ + #print(packet) + self.packet_queue.put((packet, allowed), block=False) + + #def run(self): + """ + Continually (and indefinitely) process the packet queue. Obviously this should be run in its own thread. + """ + #while True: + #self.process_item() + #os.system('cls') # clear the console for new print + #print(self) # When new packet received, update display. + # Might be a good idea to add some sort of sleep here? + + def process_item(self, block=True): + """ + Depletes the queue of a single packet that has been added from the filtering thread. + Note that by default, whatever thread calls this method *will be blocked* until there is an item in the queue. + If you don't want your thread blocked, you will need to handle Empty exceptions because the queue *will* be + empty at some points during processing. + """ + (packet, allowed) = self.packet_queue.get(block) # If there is a packet in the queue, get it (or wait for one) + self.process_packet(packet, allowed) # Actually process the packet. + return # If you want to process another packet, you'll need to call this function again. + + def process_packet(self, packet, allowed): + ip = packet.src_addr if packet.is_inbound else packet.dst_addr + #print("KNOWN IPS (process_packet): ", self.known_ips) + + # If we're not aware of this destination, a new ConnectionStat (and conseq. IPTag) is required. 
+ if ip not in self.known_ips: + # TODO: We might be able to use IP ranges to give IPs custom tags. (e.g. ROCKSTAR, UNKNOWN (USA), etc.) + self.add_con_stat_from_ip_tag(IPTag(ip, "UNKNOWN")) # Now that the ConnectionStat exists, we can get it + + con_stat = self.get_con_stat_from_ip(ip) + con_stat.add_packet(packet, allowed) # Pass the packet down to ConnectionStat where metrics will be calculated + """ Was I really updating a *copy* of the object, and not saving it back when necessary??? """ + self.connection_stats[self.known_ips[ip]] = con_stat + """ Sigh. I thought that con_stat would be a shallow copy, but considering + connection_stats is a proxy list (doesn't exist in this process), then *of course* + updating con_stat here without saving / 'writing' the new state back into the proxy list wouldn't + actually change the data in connection_stats. """ + + """ + Adds an IP (with tag) to connection stats. + """ + def add_con_stat_from_ip_tag(self, ip_tag): + this_ip = ip_tag.get_ip() + #print("KNOWN IPS: ", self.known_ips) + + if this_ip in self.known_ips: + return # If this IP has already been added, don't do it again. + + self.known_ips[this_ip] = len(self.connection_stats) # Add this_ip to dictionary with value of index into + self.connection_stats.append(ConnectionStats(ip_tag)) + + #print("KNOWN IPS: ", self.known_ips) + + #print("idk: ", self.connection_stats) + i = 0 + while i < len(self.connection_stats): + #print("trying to print ", i) + #print(self.connection_stats[i]) + i += 1 + + """ + Returns the connection stat object associated with this IP. + + NOTE: Will throw KeyError if there is no ConnectionStat for the given ip. + """ + def get_con_stat_from_ip(self, ip): + return self.connection_stats[self.known_ips[ip]] # Use known_ips to get the index into connection_stats. + + """ + Returns the human-readable representation of the current session. + """ + + +class IPTag: + + """ + Container class for storing an IP with an arbitrary String attached. 
+ """ + def __init__(self, ip, tag=""): + self.ip = ip + self.tag = tag + + def get_ip(self): + return self.ip + + def get_tag(self): + if isinstance(self.tag, list): + return "".join(self.tag) + + return self.tag + + def get_tag_raw(self): + return self.tag + + def set_tag(self, tag): + self.tag = tag + + +class ConnectionStats: + + """ + Stores the actual relevant information for a connection. + """ + def __init__(self, ip_tag): + self.ip_tag = ip_tag + #self.packets = Manager().list() # REALLY? THIS IS WHAT WAS BREAKING IT!!!??? + self.packets = [] + #print("__init__(): self.packets.__repr__():", self.packets.__repr__()) + self.last_seen = None # has not been seen yet + self.packets_in = 0 + self.packets_out = 0 + self.packets_allowed = 0 + self.packets_dropped = 0 + self.session_requests = 0 + + """ + Give a packet to this connection statistic so the relevant information can be stored. + """ + def add_packet(self, packet, allowed): + #print("add_packet(): self.packets.__repr__():", self.packets.__repr__()) + #print("ADDING PACKET TO LIST") + self.packets.append(packet) # For now, I'm just going to add it to the array. Actual stats can be added later. + if packet.is_outbound and packet.payload_length == 125 and not self.is_connected(3): + self.session_requests += 1 + #print("packet count: " + str(len(self.packets))) + #print("add_packet(): self.packets.__repr__():", self.packets.__repr__()) + self.last_seen = time.time() + #print("last seen: " + str(self.last_seen)) + + # Generic counters + if packet.is_inbound: + self.packets_in += 1 + elif packet.is_outbound: + self.packets_out += 1 + + if allowed: + self.packets_allowed += 1 + else: + self.packets_dropped += 1 + + """ + If we haven't seen any activity from this source in the last 'threshold' seconds, then we're not connected. 
+ """ + def is_connected(self, threshold=5): + if self.last_seen is None: + return False + else: + return (time.time() - self.last_seen) <= threshold + + def get_last_seen_str(self): + if self.last_seen is None: + return "Never" + else: + return "".join([str(round((time.time() - self.last_seen) * 1000)), " ms ago"]) + + """ + Sometimes, a tag (or part of it) may be temporarily overridden. + Tags are either a string, or an array of strings. + Tag overrides affect the first part of a string. + Tag overrides should not affect the default / original tag for an IP. + """ + def get_tag_override(self): + tag = self.ip_tag.get_tag_raw() + override = tag + + if isinstance(tag, list): + tag = list(tag) # shallow copy (this is fine because all the elements are immutable strings) + override = tag[0] # the thing we might be overriding + if override == "LOCAL IP" or override == "PUBLIC IP": # Local / Public IP tags take precedence. + # TODO: Check if R* SERVICE + return self.ip_tag.get_tag() + if self.is_connected(): + override = "CONNECTED" + elif self.session_requests > 0: + override = "".join([str(self.session_requests), "x JOIN REQ."]) + + # If the original tag was a string then is probably overwritten. Otherwise, we replace only the first element. + if isinstance(tag, str): + tag = override + else: + tag[0] = override + + return tag + #return override if isinstance(tag, str) else tag[1::].insert(0, override) + + + """ + Returns an anonymous dictionary of information about this connection. 
+ """ + def get_info(self): + return {'ip': self.ip_tag.get_ip(), 'tag': self.get_tag_override(), 'packet_count': len(self.packets), + 'is_connected': self.is_connected(3), 'last_seen': self.get_last_seen_str(), + 'packets_in': self.packets_in, 'packets_out': self.packets_out, 'packets_allowed': self.packets_allowed, + 'packets_dropped': self.packets_dropped} diff --git a/public_builds/guardian-3.1.0b5-fastload-fix.zip b/public_builds/guardian-3.1.0b5-fastload-fix.zip new file mode 100644 index 0000000..40c6445 Binary files /dev/null and b/public_builds/guardian-3.1.0b5-fastload-fix.zip differ diff --git a/requirements.txt b/requirements.txt index 2374c0c..681f80e 100644 --- a/requirements.txt +++ b/requirements.txt @@ -1,7 +1,7 @@ certifi==2019.6.16 chardet==3.0.4 colorama==0.4.1 -cx-Freeze==6.8.3 +cx-Freeze~=6.10.0 idna==2.8 prompt-toolkit==2.0.9 pydivert==2.1.0 diff --git a/setup.py b/setup.py index 67e4170..02145d6 100644 --- a/setup.py +++ b/setup.py @@ -13,10 +13,12 @@ replace_paths=[("*", "")], optimize=2, zip_include_packages="*", zip_exclude_packages=zip_exclude_packages, silent=True) executables = [ - Executable('app.py', targetName='Guardian.exe', icon='logo.ico') + Executable('app.py', targetName='Guardian.exe', icon='logo.ico', uac_admin=True) ] -version = '3.1.0b4-fastload-fix' + +version = '3.1.0b5-fastload-fix' +version_number = '3.1.0.5' build_path = 'build/exe.win-amd64-{}.{}'.format(sys.version_info.major, sys.version_info.minor) @@ -30,7 +32,7 @@ os.remove('build/exe/guardian-{}.zip'.format(version)) setup(name='Guardian', - version=version, + version=version_number, description='Firewall', options=dict(build_exe=buildOptions), executables=executables) diff --git a/util/DynamicBlacklist.py b/util/DynamicBlacklist.py index 83be719..82807bd 100755 --- a/util/DynamicBlacklist.py +++ b/util/DynamicBlacklist.py @@ -4,6 +4,7 @@ import json # to parse the file once it's been downloaded from sys import getsizeof # for debug testing to determine the size 
of certain things import time # timing +from pathlib import Path """ This file contains classes and methods to manage acquiring, parsing, and updating a possibly dynamic list of IP ranges @@ -100,17 +101,43 @@ def generate_all_cidr_containing_ip(ip, min_cidr=0): return ips -def parse_azure_ip_ranges(url_to_json_file): +def parse_azure_ip_ranges_from_url(url_to_json_file): """ Given a Microsoft Azure IP .JSON file, parses the file and returns an array of strings of CIDR ranges that may be used by R* Services. """ response = requests.get(url_to_json_file) response.raise_for_status() # Can't handle anything here. If we can't download the file, it's game over. - # TODO: Using reverse_search_ip_in_azure() indicates that R* Services use the generic 'AzureCloud' category. - # A bit boring but to be expected and hey, at least they're actually in the file. - # So, need to get the address ranges (they're CIDR) from that category and return a set of IPs to compare against. - azure_cloud_json = json.loads(response.content) + + return parse_azure_ip_ranges(response.content) # Parse the response and return it to be saved. + + +def get_azure_ip_file_from_url(url_to_json_file): + # TODO: Provide some sanity checks to see if the file contains the content we expect. 
+    response = requests.get(url_to_json_file)
+    response.raise_for_status()
+    return response.content
+
+
+def save_azure_file(data_to_save, where_to_save="db.json"):
+    file = open(where_to_save, mode="wb")
+    bytes_written = file.write(data_to_save)
+    file.close()
+    return bytes_written
+
+
+def azure_file_add_timestamp(azure_file, filename):
+    as_list = azure_file.splitlines(True)  # keep the line breaks
+    #print(as_list)
+    now = str(time.time())
+    # add timestamp and filename (should be formatted the same as the actual file)
+    as_list.insert(1, b' "acquiredFrom": "' + bytes(filename, 'utf-8') + b'",\n')
+    as_list.insert(1, b' "acquiredWhen": ' + bytes(now, 'utf-8') + b',\n')
+    return b''.join(as_list)  # if type(azure_file) is bytes else "".join(as_list)
+
+
+def parse_azure_ip_ranges(azure_file):
+    azure_cloud_json = json.loads(azure_file)  # load the .json file into memory
     categories = azure_cloud_json['values']
     arr_ranges = None
     for cat in categories:
@@ -119,11 +146,16 @@ def parse_azure_ip_ranges(url_to_json_file):
             break
     if arr_ranges is None:
         raise ValueError("Could not find AzureCloud category in values array.")
-    #ips = get_all_ips_from_cidr_array(arr_ranges)
-    #return ips
+    # ips = get_all_ips_from_cidr_array(arr_ranges)
+    # return ips
     return arr_ranges
 
 
+def parse_azure_ip_ranges_from_file(location_of_file):
+    file = open(location_of_file, mode='rb')
+    return parse_azure_ip_ranges(file.read())
+
+
 def cidr_to_tuple(ip_in_cidr):
     """
     Converts a string representing an IP in CIDR notation to two integers,
@@ -167,9 +199,32 @@ def construct_cidr_block_set(ips_in_cidr):
     return ip_set
 
-def get_dynamic_blacklist():
-    download_link = get_azure_ip_ranges_download()
-    ranges = parse_azure_ip_ranges(download_link[0])  # TODO: Handle multiple download files!
+
+def get_dynamic_blacklist(backup_file="db.json"):
+    # TODO: It seems like we can determine if a range has changed by looking at the 'changeNumber' attribute
+    # for a given category; however, there unfortunately doesn't appear to be any sort of timestamp included
+    # in the actual JSON file. We'll probably need to save the timestamp manually by adding it to the JSON?
+    # TL;DR the problem is that we can tell if the file has been updated by checking `changeNumber`, but that requires
+    # attempting to download the file anyway. Ideally, we want to be able to skip trying to download altogether
+    # because the method isn't entirely reliable, and also fall back to the previously saved version if the download
+    # fails.
+    #ranges = set()
+
+    try:
+        download_link = get_azure_ip_ranges_download()
+        content = get_azure_ip_file_from_url(download_link[0])  # TODO: Handle multiple download files!
+        ranges = parse_azure_ip_ranges(content)
+        # TODO: If we get multiple files, we can try to find the one with the highest changeNumber.
+        # If we got here, then the ranges are *probably* okay.
+        save_azure_file(azure_file_add_timestamp(content, download_link[0]), backup_file)
+    except Exception as e:
+        print("ERROR: Could not parse Azure ranges from URL. Reason:", e)
+        try:
+            ranges = parse_azure_ip_ranges_from_file(backup_file)
+        except FileNotFoundError as e:
+            print("ERROR: Could not find backup file.")
+            raise e
+
     ranges.extend(T2_EU)  # add R* EU ranges
     ranges.extend(T2_US)  # add R* US ranges
     dynamic_blacklist = construct_cidr_block_set(ranges)
@@ -242,11 +297,12 @@ def get_cidr_suffixes(array_of_cidr):
 
 if __name__ == "__main__":
     #print(get_all_ips_from_cidr("185.56.64.0/24"))
     #print(len(get_all_ips_from_cidr_array(["185.56.64.0/24", "185.56.64.0/22"])))
-    dl = get_azure_ip_ranges_download()
-    print(dl)
-    start = time.perf_counter()
-    ips_test = parse_azure_ip_ranges(dl[0])
-    finish = time.perf_counter()
-    print("size:", getsizeof(ips_test), "len:", len(ips_test), "seconds:", (finish - start) / 1000)
+    #dl = get_azure_ip_ranges_download()
+    #print(dl)
+    #start = time.perf_counter()
+    #ips_test = parse_azure_ip_ranges_from_url(dl[0])
+    #finish = time.perf_counter()
+    #print("size:", getsizeof(ips_test), "len:", len(ips_test), "seconds:", (finish - start) / 1000)
+    get_dynamic_blacklist("db_test.json")
     # size: 1073742040 len: 21838185, time: like 90 minutes or something, shouldn't have used perf counter here I guess
diff --git a/util/RollingAverage.py b/util/RollingAverage.py
new file mode 100755
index 0000000..350d609
--- /dev/null
+++ b/util/RollingAverage.py
@@ -0,0 +1,79 @@
+"""
+A simple maths class that helps calculate a Rolling Average of at most n values.
+"""
+
+
+class RollingAverage:
+
+    def __init__(self, max_dp=100):
+        """
+        max_dp: The maximum number of data points that will be used for the rolling average.
+        """
+        if not max_dp > 0:
+            raise ValueError("Cannot create a rolling average of " + str(max_dp) + " data points"
+                             " (must be greater than 0).")
+
+        self.storage = [0]*max_dp  # The storage of all the data points.
+        self.next_idx = 0          # The next index in the circular storage.
+        self.is_full = False       # If we have reached the maximum amount of data points for our rolling average.
+        self.result = 0            # The actual rolling average.
+
+    def add_value(self, value):
+        """
+        Add a data point to the rolling average, returning the new average.
+        """
+        if self.is_full:  # Simple moving average. Need to look at value at current index before overriding it.
+            self.result += (value - self.storage[self.next_idx]) / (len(self.storage))  # Get the delta div. max_dp.
+        else:  # Cumulative moving average. Our n is not yet at maximum, and no array access is necessary.
+            self.result += (value - self.result) / (self.next_idx + 1)
+
+        self.storage[self.next_idx] = value    # Store this value on the buffer.
+        self.next_idx = self.__get_next_idx()  # Calculate the index for next time.
+        return self.get_avg()                  # Return the new average.
+
+    def __get_next_idx(self):
+        """
+        Calculates the next index into self.storage.
+        """
+        nxt = (self.next_idx + 1) % len(self.storage)
+
+        if nxt == 0:  # We looped back to the start of the circular buffer, meaning we've run out of space.
+            self.is_full = True  # Is assigning a value computationally cheaper than comparing a value?
+
+        return nxt
+
+    def get_avg(self):
+        return self.result
+
+    def __str__(self):
+        return str(self.get_avg())
+
+
+if __name__ == "__main__":
+    rl = RollingAverage(100)
+    print(rl.add_value(50))  # 50.0
+    print(rl.add_value(50))  # 50.0
+    print(rl.add_value(0))   # 33.3333... (because 100 / 3)
+    print(rl)                # 33.3333... (should still be the same)
+    rl.add_value(0)
+    print(rl.get_avg())  # should be 25.0, but is actually 24.99999999... due to floating point error.
+
+    one = RollingAverage(1)
+    print(one.add_value(999))  # 999.0
+    if not one.is_full:
+        raise AttributeError("Rolling Average 'one' is meant to be marked as full but is not.")
+    print(one.add_value(-2))  # -2.0 (the average of 1 data-point is just that data-point)
+
+    try:
+        RollingAverage(0)    # A rolling average of 0 data points does not make sense. Attempting to do so
+    except ValueError as e:  # would create ZeroDivisionError or IndexError exceptions at some point.
+        print(e)
+    else:
+        raise RuntimeError("TEST FAIL: Created a Rolling Average of 0 data points.")
+
+    try:
+        RollingAverage(-7)  # A rolling average of a negative amount of data points also does not make sense.
+    except ValueError as e:
+        print(e)
+    else:
+        raise RuntimeError("TEST FAIL: Created a Rolling Average of -7 data points.")
diff --git a/util/WorkingDirectoryFix.py b/util/WorkingDirectoryFix.py
new file mode 100644
index 0000000..60e6395
--- /dev/null
+++ b/util/WorkingDirectoryFix.py
@@ -0,0 +1,23 @@
+"""
+Simple workaround that sets the current working directory to wherever Guardian.exe exists,
+instead of wherever the console was launched from. Functions like open() use the working directory
+as a base for any local paths specified, so if the working directory isn't as expected then files get
+saved in the wrong location (e.g. on the Desktop if you launched Guardian from a console that had its
+path currently at the Desktop).
+"""
+
+from sys import argv
+from os import chdir  # chdir to change working directory
+
+
+def wd_fix():
+    # if argv is empty for some reason then we can't fix it
+    if len(argv) < 1:
+        return False
+
+    path_to_exe = argv[0]
+    try:
+        chdir(path_to_exe[:path_to_exe.rindex("\\")])  # "go up one folder" by removing the last folder from the path
+    except (OSError, ValueError):
+        return False
+    return True  # working directory is now the folder containing the executable
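For reference, the working-directory fix above could also be written with `pathlib`, which avoids hard-coding the Windows `\` separator (`rindex("\\")` fails on forward-slash paths) and handles relative `argv[0]` values. This is an illustrative sketch, not part of the patch; the name `wd_fix_portable` is hypothetical:

```python
from os import chdir
from pathlib import Path
from sys import argv


def wd_fix_portable():
    """Set the working directory to the folder containing the running executable/script.

    Returns True on success, False if the path could not be determined or changed.
    """
    if len(argv) < 1 or not argv[0]:
        return False  # argv can be empty under some embedded interpreters
    exe_dir = Path(argv[0]).resolve().parent  # resolve() handles relative paths and symlinks
    try:
        chdir(exe_dir)
    except OSError:
        return False
    return True
```

Calling such a helper from `app.py` before any `open()` call would mirror how `wd_fix()` is intended to be used, with the bonus that the same code works in a plain `python app.py` launch and in a frozen cx_Freeze build.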