What about a new forum category?

With access to Claude, ChatGPT, Perplexity, Copilot, Poe, Bard, Quodo, and Codeium, I expect the amount of code produced by non-coders in the forum to increase significantly. Much of this code will be interesting for others to study and learn from. It would also make the platform more engaging and practical, and help us make far better use of the data we already have.

It may be a good idea to create a category in the forum dedicated solely to sharing code with others. This can include both useful and less useful code.

What I have noticed is that there is a considerable amount of code on the forum, but it is scattered across various places and often mixed in with a range of other topics. I believe many members have created practical code snippets that others could benefit from.

Some examples (there are many more):

  • A script that visits the Portfolio123 website every week, runs a screen for me, imports the 500 best stocks for the USA and EU, marks the ones I own against that list, and saves everything to a spreadsheet. The purpose is to archive the results and automate the process.
  • A second script converts the HTML I retrieved from https://www.portfolio123.com/doc/doc_index.jsp (including the subpages) into a PDF, which is then split into 10 PDF files for uploading to Google NotebookLM.
  • A third script searches for reasonably priced out-of-the-money put options on stocks I believe are candidates for shorting.
  • A fourth script sorts the active nodes of a ranking system by weight, from highest to lowest.

And so on... I am still working on a functioning machine learning script and will publish it when it is finished.
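To give a rough idea of the direction, here is a minimal sketch of the kind of pipeline I have in mind; the file name, the factor columns, and the model choice are placeholders I made up for illustration, not the finished script, and it assumes scikit-learn is installed:

# NOTE: rough sketch only - "factor_data.csv", the column names, and the model choice
# below are hypothetical placeholders, not the finished script.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# One row per stock per date, with factor ranks and a forward-return label
df = pd.read_csv("factor_data.csv")
features = ["value_rank", "momentum_rank", "quality_rank"]  # placeholder factor columns
target = "forward_3m_return"                                # placeholder label column

# Keep the time order: no shuffling, so the last 20% of rows become the test set
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df[target], test_size=0.2, shuffle=False
)

model = GradientBoostingRegressor(random_state=42)
model.fit(X_train, y_train)
print("Out-of-sample R^2:", r2_score(y_test, model.predict(X_test)))

The four scripts described above are included below.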

# Script 1: weekly Portfolio123 screen scraper that marks my holdings and saves the results to Excel
import asyncio
from datetime import datetime
from playwright.async_api import async_playwright
import openpyxl
from openpyxl.styles import PatternFill, Font, Alignment, Border, Side
from openpyxl.utils import get_column_letter
from bs4 import BeautifulSoup
import re
import os

# List of company names (my holdings) to check against the screen results
all_names = """
Amerigo Resources Ltd.
BARK, Inc.
Brightcove, Inc.
Ceragon Networks Ltd.
Expensify, Inc.
Jaguar Mining, Inc.
Kamada Ltd.
Knight Therapeutics, Inc.
Nature's Sunshine Products, Inc.
Perma-Pipe International Holdings, Inc.
Pro-Dex, Inc.
Rigel Pharmaceuticals, Inc.
Viant Technology, Inc.
WM Technology, Inc.
Multiconsult ASA
SP Group A/S
Bittium Oyj
Aramis Group SAS
Terveystalo Oyj
MT Hojgaard Holding A/S
Bonheur ASA
Grupo Empresarial San José SA
Humana AB
Pexip Holding ASA
Alzchem Group AG
Akastor ASA
M1 Kliniken AG
RaySearch Laboratories AB
Verve Group SE
""".strip().split("\n")
def normalize_name(name):
    return name.strip().lower()

def clean_ticker(ticker):
    match = re.match(r'([A-Z]+:[A-Z]+)', ticker)
    if match:
        return match.group(1)
    return ticker

def remove_duplicate_tickers(ticker):
    tickers = re.findall(r'\b[A-Z]+:[A-Z]+\b', ticker)
    return tickers[0] if tickers else ticker

async def run(playwright):
    browser = await playwright.chromium.launch(headless=False)
    rank_page = await browser.new_page()

    # Login
    await rank_page.goto("https://www.portfolio123.com/app/auth")
    await rank_page.fill('input[name="user"]', 'xxx')
    await rank_page.fill('input[name="passwd"]', 'xxx')
    await rank_page.click('button[type="submit"]')
    await rank_page.wait_for_load_state('networkidle')

    # Create Excel file
    file_path = f'C:/Users/mywag/Documents/YT/DATAGRUNNLAG/WSCREEN/scraped_data_{datetime.now().strftime("%Y%m%d_%H%M%S")}.xlsx'
    workbook = openpyxl.Workbook()
    sheet = workbook.active
    sheet.title = "Market Analysis"

    # Design configuration with new color scheme
    header_font = Font(name="Segoe UI", bold=True, size=12, color="222831")
    data_font = Font(name="Segoe UI Light", size=10, color="222831")
    rankpos_font = Font(name="Segoe UI Semibold", size=10, color="222831")

    # Color palette
    header_fill = PatternFill(start_color="222831", end_color="222831", fill_type="solid")
    zebra_fill1 = PatternFill(start_color="EEEEEE", end_color="EEEEEE", fill_type="solid")
    zebra_fill2 = PatternFill(start_color="FFFFFF", end_color="FFFFFF", fill_type="solid")
    rankpos_fill = PatternFill(start_color="00ADB5", end_color="00ADB5", fill_type="solid")
    highlight_fill = PatternFill(start_color="393E46", end_color="393E46", fill_type="solid")
    match_fill = PatternFill(start_color="00ADB5", end_color="00ADB5", fill_type="solid")

    # Borders and alignment
    thin_border = Border(
        left=Side(style='thin', color="393E46"),
        right=Side(style='thin', color="393E46"),
        top=Side(style='thin', color="393E46"),
        bottom=Side(style='thin', color="393E46")
    )
    header_border = Border(bottom=Side(style='medium', color="222831"))
    alignment = Alignment(horizontal="left", vertical="center", wrap_text=True)
    center_alignment = Alignment(horizontal="center", vertical="center")

    # Set column widths
    sheet.column_dimensions['A'].width = 40
    
    # Style header
    sheet["A1"] = "Company Name"
    sheet["A1"].font = Font(name="Segoe UI", bold=True, size=12, color="EEEEEE")  # White text for dark header
    sheet["A1"].fill = header_fill
    sheet["A1"].alignment = center_alignment
    sheet["A1"].border = header_border

    # Style company names
    for idx, name in enumerate(all_names, start=2):
        cell = sheet.cell(row=idx, column=1, value=name)
        cell.font = data_font
        cell.alignment = alignment
        cell.border = thin_border
        cell.fill = zebra_fill1 if idx % 2 == 0 else zebra_fill2

    async def scrape_and_highlight_data(url, sheet, start_column):
        await rank_page.goto(url)
        await rank_page.wait_for_load_state('networkidle')
        scraped_names = []

        try:
            element = rank_page.locator('//html/body/div[2]/div[4]/div/div[2]/div[2]/div[1]/div[2]/div[1]/div[2]/span[2]/a[5]')
            await element.wait_for(state="visible", timeout=10000)
            await element.click()
            await rank_page.wait_for_timeout(5000)

            dropdown_handle = await rank_page.wait_for_selector('#resultrowspp', timeout=10000)
            await dropdown_handle.click()
            await rank_page.wait_for_timeout(1000)
            await dropdown_handle.select_option(value="500")
            await rank_page.wait_for_timeout(10000)

            table_locator = rank_page.locator('//*[@id="results-table"]/table')
            await table_locator.wait_for(state="visible", timeout=20000)
            table_html = await table_locator.inner_html()
            
            soup = BeautifulSoup(table_html, 'html.parser')
            rows = soup.find_all('tr')

            # Process headers
            header_row = rows[0]
            header_cells = header_row.find_all(['th'])
            for col_idx, cell in enumerate(header_cells, start=1):
                header_cell = sheet.cell(row=1, column=col_idx + start_column)
                header_cell.value = cell.get_text(strip=True)
                header_cell.font = Font(name="Segoe UI", bold=True, size=12, color="EEEEEE")  # White text for headers
                header_cell.fill = rankpos_fill if '@RankPos' in cell.get_text() else header_fill
                header_cell.alignment = center_alignment
                header_cell.border = header_border
                
                if 'NAME' in cell.get_text().upper():
                    sheet.column_dimensions[get_column_letter(col_idx + start_column)].width = 40
                else:
                    sheet.column_dimensions[get_column_letter(col_idx + start_column)].width = 15

            # Process data rows
            for row_idx, row in enumerate(rows[1:], start=2):
                columns = row.find_all(['td'])
                for col_idx, column in enumerate(columns, start=1):
                    cell_value = column.get_text(strip=True)
                    
                    header_text = sheet.cell(row=1, column=col_idx + start_column).value
                    if header_text and 'Ticker' in header_text.upper():
                        match = re.match(r'([A-Z]+:[A-Z]+)', cell_value)
                        cell_value = match.group(1) if match else cell_value[:8]

                    cell = sheet.cell(row=row_idx, column=col_idx + start_column, value=cell_value)
                    cell.font = rankpos_font if '@RankPos' in str(sheet.cell(row=1, column=col_idx + start_column).value) else data_font
                    cell.alignment = center_alignment
                    cell.border = thin_border
                    
                    base_fill = zebra_fill1 if row_idx % 2 == 0 else zebra_fill2
                    cell.fill = base_fill

                    if '@RankPos' in str(sheet.cell(row=1, column=col_idx + start_column).value):
                        try:
                            rank_value = float(cell_value)
                            if rank_value > 40:
                                cell.fill = highlight_fill
                                cell.font = Font(name="Segoe UI", bold=True, size=10, color="EEEEEE")  # White text for highlighted cells
                        except ValueError:
                            pass

                    for name in all_names:
                        if normalize_name(name) in normalize_name(cell_value):
                            for col in range(start_column, start_column + len(columns)):
                                match_cell = sheet.cell(row=row_idx, column=col)
                                match_cell.fill = match_fill
                                match_cell.font = Font(name="Segoe UI", bold=True, size=10, color="222831")

                    scraped_names.append(normalize_name(cell_value))

            return scraped_names

        except Exception as e:
            print(f"Error during scraping: {e}")
            return scraped_names

    # Scrape data from both URLs
    scraped_names_1 = await scrape_and_highlight_data("https://www.portfolio123.com/app/screen/summary/302424?st=0&mt=1", sheet, 3)
    scraped_names_2 = await scrape_and_highlight_data("https://www.portfolio123.com/app/screen/summary/302423?st=1&mt=1", sheet, 35)

    # Combine all scraped names
    all_scraped_names = set(scraped_names_1 + scraped_names_2)

    # Mark non-matching companies
    for idx, name in enumerate(all_names, start=2):
        if normalize_name(name) not in all_scraped_names:
            cell = sheet.cell(row=idx, column=1)
            cell.fill = PatternFill(start_color="222831", end_color="222831", fill_type="solid")
            cell.font = Font(name="Segoe UI Light", size=10, italic=True, color="EEEEEE")  # White text for dark background

    # Add legend box
    legend_row = sheet.max_row + 2
    legend_styles = [
        ("Matched Companies", "00ADB5"),
        ("High RankPos (>40)", "393E46"),
        ("Normal Entries", "EEEEEE"),
        ("Non-matched Companies", "222831")
    ]

    sheet.cell(row=legend_row, column=1, value="LEGEND:").font = Font(name="Segoe UI", bold=True, size=12, color="222831")
    
    for idx, (text, color) in enumerate(legend_styles):
        cell = sheet.cell(row=legend_row + idx + 1, column=1, value=text)
        cell.fill = PatternFill(start_color=color, end_color=color, fill_type="solid")
        # Adjust text color based on background for readability
        text_color = "EEEEEE" if color in ["222831", "393E46"] else "222831"
        cell.font = Font(name="Segoe UI Light", size=10, color=text_color)
        cell.alignment = alignment

    # Save and open Excel file
    workbook.save(file_path)
    print(f"Data has been retrieved and saved to {file_path}")
    os.startfile(file_path)

    await browser.close()

async def main():
    async with async_playwright() as playwright:
        await run(playwright)

if __name__ == "__main__":
    asyncio.run(main())

# Script 2: converts the scraped P123 documentation HTML to PDF and splits it into 10 parts
from reportlab.pdfgen import canvas
from reportlab.lib.pagesizes import landscape
from reportlab.pdfbase import pdfmetrics
from reportlab.pdfbase.ttfonts import TTFont
from reportlab.lib import colors
from PyPDF2 import PdfReader, PdfWriter
import os
from bs4 import BeautifulSoup

# Custom extra wide page size (width, height in points)
CUSTOM_SIZE = (2000, 1000)  # Much wider page

# File paths
html_file = r'C:/Userxxxx G.html'
pdf_file = r'C:/Users/myxxxx NG.pdf'
output_dir = r'C:/Users/xxxx FILES/'

# Create the output directory
os.makedirs(output_dir, exist_ok=True)

# Register the Arial font (arial.ttf must be available on the system)
pdfmetrics.registerFont(TTFont('Arial', 'arial.ttf'))

# Read the HTML and convert it to PDF
print("Converting HTML to PDF...")
with open(html_file, 'r', encoding='utf-8') as file:
    soup = BeautifulSoup(file, 'html.parser')

# Create the PDF with the extra-wide page size
pagesize = CUSTOM_SIZE
c = canvas.Canvas(pdf_file, pagesize=pagesize)
width, height = pagesize
y = height - 50
x = 50

# Style settings
c.setFont('Arial', 10)
line_height = 14
max_width = width - 100  # Much larger working area

# Process the HTML elements
for element in soup.find_all(['p', 'h1', 'h2', 'h3', 'table']):
    tag_name = element.name
    text = element.get_text().strip()
    
    if tag_name.startswith('h'):
        c.setFont('Arial', 14)
        if y < 50:
            c.showPage()
            y = height - 50
        c.drawString(x, y, text)
        y -= line_height * 1.5
        c.setFont('Arial', 10)
    
    elif tag_name == 'table':
        rows = element.find_all('tr')
        for row in rows:
            cols = row.find_all(['td', 'th'])
            col_x = x
            for col in cols:
                if y < 50:
                    c.showPage()
                    y = height - 50
                    col_x = x
                text = col.get_text().strip()
                # Extra wide columns
                col_width = max(200, len(text) * 8)
                c.drawString(col_x, y, text)
                col_x += col_width
            y -= line_height
    
    else:  # Regular text
        words = text.split()
        line = []
        for word in words:
            line.append(word)
            line_text = ' '.join(line)
            if c.stringWidth(line_text) > max_width:
                if y < 50:
                    c.showPage()
                    y = height - 50
                c.drawString(x, y, ' '.join(line[:-1]))
                y -= line_height
                line = [word]
        
        if line:
            if y < 50:
                c.showPage()
                y = height - 50
            c.drawString(x, y, ' '.join(line))
            y -= line_height * 1.2

c.save()
print(f"PDF lagret: {pdf_file}")

# Split the PDF into 10 files
print("Splitting the PDF into 10 parts...")
reader = PdfReader(pdf_file)
total_pages = len(reader.pages)
pages_per_part = total_pages // 10

for i in range(10):
    writer = PdfWriter()
    start_page = i * pages_per_part
    end_page = start_page + pages_per_part if i < 9 else total_pages
    
    for page_num in range(start_page, end_page):
        writer.add_page(reader.pages[page_num])
    
    output_pdf = os.path.join(output_dir, f"DEL_{i+1}.pdf")
    with open(output_pdf, "wb") as output_file:
        writer.write(output_file)
    print(f"Laget fil: {output_pdf}")

print("Ferdig med å dele PDF i 10 filer.")
"""Organisere XML-dokumentet etter vekting
Sikre konsistent og forutsigbar sortering
Forberede data for videre analyse"""

import re
from lxml import etree

def escape_special_chars_in_formula(xml_text):
    """
    Escape special characters (<, >, &) inside <Formula> tags.
    """
    pattern = r'(<Formula>)(.*?)(</Formula>)'
    
    def replacer(match):
        start, content, end = match.groups()
        # Replace & first to avoid double-escaping
        content = content.replace('&', '&amp;').replace('<', '&lt;').replace('>', '&gt;')
        return f"{start}{content}{end}"
    
    return re.sub(pattern, replacer, xml_text, flags=re.DOTALL)

def sort_stock_nodes(element):
    """
    Recursively sorts the <StockFormula> and <StockFactor> child nodes
    by their Weight attribute in descending order.
    """
    for child in element:
        # Recursive call to sort the child nodes first
        sort_stock_nodes(child)
    
    # Find all StockFormula and StockFactor nodes directly under this element
    stock_nodes = [child for child in element if child.tag in ['StockFormula', 'StockFactor']]
    
    if stock_nodes:
        # Sort the nodes by Weight in descending order
        sorted_nodes = sorted(
            stock_nodes,
            key=lambda x: float(x.get('Weight', 0)),
            reverse=True
        )
        
        # Remove the existing nodes
        for node in stock_nodes:
            element.remove(node)
        
        # Append the sorted nodes back to the parent
        for node in sorted_nodes:
            element.append(node)

def pretty_print_xml(tree, output_file):
    """
    Writes the XML tree to the output file with pretty formatting.
    """
    tree.write(output_file, pretty_print=True, xml_declaration=True, encoding='utf-8')

def main():
    input_file = r"C:/Usersxxxxx /TEST/VEKTING.txt"
    output_fixed_file = r"C:/Users/mywaxxxxx NG_fixed.txt"
    output_sorted_file = r"C:/Users`xxx `txt"
    
    try:
        # Read the XML file as text
        with open(input_file, 'r', encoding='utf-8') as file:
            xml_text = file.read()

        # Fix special characters inside <Formula> tags
        xml_text = escape_special_chars_in_formula(xml_text)

        # Write the corrected XML to a new file
        with open(output_fixed_file, 'w', encoding='utf-8') as file:
            file.write(xml_text)

        print(f"Corrected XML written to '{output_fixed_file}'.")
        
        # Parse the corrected XML
        parser = etree.XMLParser(remove_blank_text=True)
        tree = etree.parse(output_fixed_file, parser)
        root = tree.getroot()
        
        # Sort the nodes
        sort_stock_nodes(root)

        # Write the sorted XML to a new TXT file
        pretty_print_xml(tree, output_sorted_file)

        print(f"Sorted XML written to '{output_sorted_file}'.")
    
    except etree.XMLSyntaxError as e:
        print(f"XML Syntax Error: {e}")
    except FileNotFoundError:
        print(f"Feil: Filen '{input_file}' ble ikke funnet.")
    except Exception as e:
        print(f"En uventet feil oppsto: {e}")

if __name__ == "__main__":
    main()
# Script 4: scans for reasonably priced out-of-the-money puts on a scored list of short candidates
import datetime as dt
import pandas as pd
import yfinance as yf
import requests
from bs4 import BeautifulSoup
import numpy as np
from scipy.stats import norm
from concurrent.futures import ThreadPoolExecutor
from IPython.display import display  # display() further down assumes an IPython/Jupyter environment

def parse_scores(score_str):
    scores = {}
    lines = score_str.strip().split('\n')
    for line in lines:
        ticker, score = line.split()
        scores[ticker] = float(score)
    return scores

def get_tickers(score_str):
    scores = parse_scores(score_str)
    return list(scores.keys()), scores

# Example input string copied from Excel (one ticker and score per line)
score_str = """
ENVX	1.00
IBRX	2.00
SKYH	3.00
NOVA	4.00
IOVA	5.00
QURE	6.00
EU	7.00
AMTX	8.00
PHAT	9.00
KRUS	10.00
CLDX	11.00
MP	12.00
IGMS	13.00
WOLF	14.00
XOMA	15.00
CDZI	16.00
ZVRA	17.00
NPWR	18.00
BETR	19.00
RUN	20.00
APLD	21.00
MESO	22.00
CRMD	23.00
TSVT	24.00
MGTX	25.00
LQDA	26.00
VRNA	27.00
PRME	28.00
NNOX	29.00
CLNN	30.00
OABI	31.00
CTOS	32.00
GLUE	33.00
ADD	34.00
PLL	35.00
UUUU	36.00
BCYC	37.00
FTEL	38.00
KYMR	39.00
ARWR	40.00
IMNM	41.00
LAZR	42.00
TBPH	43.00
YMAB	44.00
ATEX	45.00
LZM	46.00
UEC	47.00
SERA	48.00
VRDN	49.00
ANAB	50.00
NEOG	51.00
SPRY	52.00
TCRX	53.00
ALT	54.00
RXRX	55.00
RNA	56.00
SRG	57.00
ASND	58.00
SWTX	59.00
FCEL	60.00
NTLA	61.00
TARS	62.00
WULF	63.00
ATYR	64.00
GERN	65.00
SCPH	66.00
SGMT	67.00
NN	68.00
MDGL	69.00
AUTL	70.00
TGTX	71.00
EVLV	72.00
RWT	73.00
CIFR	74.00
SNDX	75.00
MSTR	76.00
CYTK	77.00
JANX	78.00
CRSP	79.00
MYO	80.00
SLN	81.00
STOK	82.00
CORZ	83.00
BBIO	84.00
LEGN	85.00
RCEL	86.00
RDW	87.00
SPIR	88.00
CCJ	89.00
ONIT	90.00
RYTM	91.00
BOWL	92.00
MARA	93.00
SOC	94.00
PFMT	95.00
BYRN	96.00
DNTH	97.00
RGNX	98.00
FHTX	99.00
"""

# Get the tickers and scores
tickers, scores = get_tickers(score_str)

# Normalize the scores to the 0-1 range
max_score = max(scores.values())
min_score = min(scores.values())
normalized_scores = {ticker: (score - min_score) / (max_score - min_score) for ticker, score in scores.items()}

def process_expiration(ticker, exp_td_str):
    options = ticker.option_chain(exp_td_str)
    puts = options.puts.copy()  # copy to avoid SettingWithCopyWarning; includes the volume column
    puts['optionType'] = 'P'
    puts['expiration'] = exp_td_str
    return puts

def get_otm_puts(data, current_price):
    otm_puts = data[data['strike'] < current_price].copy()
    return otm_puts

def filter_expirations(expirations, min_months=3, max_months=9):
    now = dt.datetime.now()
    min_date = now + dt.timedelta(days=30 * min_months)
    max_date = now + dt.timedelta(days=30 * max_months)
    filtered_expirations = [
        exp for exp in expirations
        if min_date <= dt.datetime.strptime(exp, "%Y-%m-%d") <= max_date
    ]
    return filtered_expirations

def scrape_implied_volatility(symbol, expiration_date):
    url = f"https://finance.yahoo.com/quote/{symbol}/options?p={symbol}&date={int(expiration_date.timestamp())}"
    response = requests.get(url)
    soup = BeautifulSoup(response.text, 'html.parser')

    iv_dict = {}
    rows = soup.find_all('tr')
    for row in rows:
        cols = row.find_all('td')
        if len(cols) > 10:
            try:
                option_strike = float(cols[2].text.replace(',', ''))
                option_type = cols[0].text.strip()
                iv_text = cols[10].text.strip('%')
                implied_volatility = float(iv_text)
                if option_type == 'Put':
                    iv_dict[option_strike] = implied_volatility
            except (ValueError, IndexError):
                continue
    return iv_dict

def calculate_historical_volatility(symbol, window='6mo'):
    tk = yf.Ticker(symbol)
    hist = tk.history(period=window)
    hist['LogReturn'] = np.log(hist['Close'] / hist['Close'].shift(1))
    hist_vol = hist['LogReturn'].std() * np.sqrt(252) * 100  # Annualize and convert to percent
    return hist_vol

def calculate_percentage_deviation(historical_vol, implied_vol):
    if pd.isna(implied_vol) or pd.isna(historical_vol) or implied_vol == 0:
        return np.nan
    return ((historical_vol - implied_vol) / implied_vol) * 100

def calculate_black_scholes(S, K, T, r, sigma):
    # Black-Scholes value of a European put (the script compares it with the put's market price)
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    put_price = K * np.exp(-r * T) * norm.cdf(-d2) - S * norm.cdf(-d1)
    return put_price

def process_ticker(symbol):
    tk = yf.Ticker(symbol)
    expirations = tk.options
    filtered_expirations = filter_expirations(expirations, min_months=3, max_months=9)
    data = pd.DataFrame()

    for exp_td_str in filtered_expirations:
        exp_data = process_expiration(tk, exp_td_str)
        data = pd.concat(objs=[data, exp_data], ignore_index=True)

    if 'strike' not in data.columns:
        print(f"Advarsel: 'strike'-kolonne mangler for {symbol}")
        return pd.DataFrame()

    data['underlyingSymbol'] = symbol
    current_price = tk.history(period="1d")['Close'].iloc[-1]
    otm_puts = get_otm_puts(data, current_price)

    if otm_puts.empty:
        print(f"Advarsel: Ingen OTM puts for {symbol} ved nåværende pris {current_price}")
        return pd.DataFrame()

    for exp_td_str in filtered_expirations:
        expiration_date = dt.datetime.strptime(exp_td_str, "%Y-%m-%d")
        iv_dict = scrape_implied_volatility(symbol, expiration_date)
        for idx, row in otm_puts.iterrows():
            strike_price = row['strike']
            if strike_price in iv_dict:
                otm_puts.loc[idx, 'impliedVolatility'] = iv_dict[strike_price]

    hist_vol = calculate_historical_volatility(symbol)
    otm_puts['historicalVolatility'] = hist_vol

    # Convert implied volatility to the same scale as historical volatility (percent)
    otm_puts['impliedVolatility'] = otm_puts['impliedVolatility'] * 100

    # Add a 'volatilityDifference' column: historical minus implied volatility
    otm_puts['volatilityDifference'] = otm_puts['historicalVolatility'] - otm_puts['impliedVolatility']

    # Add the original score column
    otm_puts['originalScore'] = scores.get(symbol, np.nan)

    # Create an 'adjustedVolatilityDifference' column that combines the difference with the normalized score
    if symbol in normalized_scores:
        otm_puts['adjustedVolatilityDifference'] = otm_puts['volatilityDifference'] * (1 + 0.1 * normalized_scores[symbol])

    # Add columns for the Black-Scholes calculation
    otm_puts['stockPrice'] = current_price
    otm_puts['exercisePrice'] = otm_puts['strike']
    otm_puts['timeToMaturity'] = (pd.to_datetime(otm_puts['expiration']) - dt.datetime.now()).dt.days / 365.25
    otm_puts['riskFreeRate'] = 0.05  # Assume a 5% risk-free rate for simplicity
    otm_puts['annualizedVolatility'] = otm_puts['impliedVolatility'] / 100

    # Compute the Black-Scholes value
    otm_puts['blackScholes'] = otm_puts.apply(
        lambda row: calculate_black_scholes(
            row['stockPrice'], row['exercisePrice'], row['timeToMaturity'],
            row['riskFreeRate'], row['annualizedVolatility']
        ), axis=1
    )

    # Compute the percentage difference between the Black-Scholes value and the market price
    otm_puts['PctDiff'] = ((otm_puts['blackScholes'] - otm_puts['lastPrice']) / otm_puts['lastPrice']) * 100

    return otm_puts

# Use ThreadPoolExecutor to process several tickers in parallel
with ThreadPoolExecutor(max_workers=10) as executor:
    results = list(executor.map(process_ticker, tickers))

# Concatenate all results into a single DataFrame
all_otm_puts = pd.concat(results, ignore_index=True)

# Compute the percentage deviation between historical and implied volatility
all_otm_puts['percentageDeviation'] = all_otm_puts.apply(
    lambda row: calculate_percentage_deviation(row['historicalVolatility'], row['impliedVolatility']),
    axis=1
)

# Make sure the 'adjustedVolatilityDifference' column contains no NaN values
all_otm_puts = all_otm_puts.dropna(subset=['adjustedVolatilityDifference'])

# Make sure the DataFrame is not empty before using idxmax
if not all_otm_puts.empty:
    highlighted_otm_puts = all_otm_puts.loc[all_otm_puts.groupby('underlyingSymbol')['adjustedVolatilityDifference'].idxmax()]
else:
    highlighted_otm_puts = pd.DataFrame()

# Sort the highlighted options by adjustedVolatilityDifference in descending order
highlighted_otm_puts = highlighted_otm_puts.sort_values(by='adjustedVolatilityDifference', ascending=False)

# Drop columns not needed for display, including 'lastTradeDate'
columns_to_drop = ['bid', 'ask', 'change', 'percentChange', 'openInterest', 'inTheMoney', 'contractSize', 'lastTradeDate']
highlighted_otm_puts = highlighted_otm_puts.drop(columns=columns_to_drop, errors='ignore')
all_otm_puts = all_otm_puts.drop(columns=columns_to_drop, errors='ignore')

# Reorder the columns
cols = list(highlighted_otm_puts.columns)
new_order = ['contractSymbol', 'strike', 'volume', 'volatilityDifference', 'originalScore', 'PctDiff'] + [col for col in cols if col not in ['contractSymbol', 'strike', 'volume', 'volatilityDifference', 'originalScore', 'PctDiff']]
highlighted_otm_puts = highlighted_otm_puts[new_order]

# Function for coloring the PctDiff column
def color_pct_diff(val):
    color = 'red' if val < 0 else 'green'  # Red for negative, green for positive
    return f"color: {color}"

# Function for coloring the volatilityDifference column
def color_volatility_difference(val):
    color = 'red' if val < 0 else 'green'  # Red for negative, green for positive
    return f"color: {color}"

# Function for coloring the originalScore column
def color_original_score(val):
    color = 'blue' if val > 90 else 'black'  # Blue for scores above 90
    return f"color: {color}"

# Apply styling to the data
styled_highlighted = highlighted_otm_puts.style.applymap(color_pct_diff, subset=['PctDiff'])
styled_highlighted = styled_highlighted.applymap(color_volatility_difference, subset=['volatilityDifference'])
styled_highlighted = styled_highlighted.applymap(color_original_score, subset=['originalScore'])

# Extra styling for a more polished look
styled_highlighted = styled_highlighted.set_table_styles(
    [
        {'selector': 'th', 'props': [('background-color', '#f7f7f9'), ('color', '#333'), ('font-weight', 'bold'), ('border', '1px solid #ddd')]},
        {'selector': 'td', 'props': [('border', '1px solid #ddd')]},
        {'selector': 'tr:nth-child(even)', 'props': [('background-color', '#f9f9f9')]},
        {'selector': 'tr:nth-child(odd)', 'props': [('background-color', '#ffffff')]},
        {'selector': 'tr:hover', 'props': [('background-color', '#f1f1f1')]}
    ]
).set_properties(**{'text-align': 'center'}).format("{:.2f}", subset=pd.IndexSlice[:, ['strike', 'volume', 'lastPrice', 'impliedVolatility', 'historicalVolatility', 'adjustedVolatilityDifference', 'stockPrice', 'exercisePrice', 'timeToMaturity', 'riskFreeRate', 'annualizedVolatility', 'volatilityDifference', 'originalScore', 'PctDiff', 'blackScholes', 'percentageDeviation']])

# Set pandas display options
pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
pd.set_option('display.width', None)
pd.set_option('display.colheader_justify', 'center')

# Show the styled DataFrames
print("\nHighest-scored options for each ticker (sorted by adjustedVolatilityDifference):\n")
display(styled_highlighted)

print("\nAll OTM puts:\n")
display(all_otm_puts.style.format("{:.2f}", subset=pd.IndexSlice[:, ['strike', 'volume', 'lastPrice', 'impliedVolatility', 'historicalVolatility', 'adjustedVolatilityDifference', 'stockPrice', 'exercisePrice', 'timeToMaturity', 'riskFreeRate', 'annualizedVolatility', 'volatilityDifference', 'originalScore', 'PctDiff', 'blackScholes', 'percentageDeviation']]))

all_otm_puts.to_csv('otm_put_options.csv', index=False)
highlighted_otm_puts.to_csv('highlighted_otm_put_options.csv', index=False)


You could use Projects in Claude 3.5 or Canvas in ChatGPT to facilitate cooperation among members. It would be too expensive for P123 to do this, but if you are interested enough in a coding result you could feed member ideas into Claude 3.5 Projects yourself, share the output in the forum, and keep incorporating further ideas until everyone is satisfied with the code.

This kind of organized cooperation could generate additional interest in your idea, since the collaboration between members and the LLM might produce polished code as well as objective discussion of how useful the final results are.

ChatGPT Canvas with input and interaction from multiple P123 members would also be worth considering. Too bad P123 probably cannot afford it; the commercial API does seem expensive.

Members would have to shepherd their original coding ideas in the forum thread using their private accounts, take responsibility for interacting with Claude 3, and post the interim and final results along with observations and suggestions for new ideas. With the forum as it is, the use case for the code would need wide general interest and be difficult enough to code that it requires cooperation among members.

I guess this is basically a question of how good something like Claude 3 Projects would be at maintaining a simple open-source codebase. GitHub has a coding AI now, I think; perhaps it could be used. I have not tried it and could be wrong.

I would be interested in a dedicated code section like that.

I use another piece of software called RealTest, which is pretty much a purely technical system. Some members used the entire forum, the user guide, and some sample scripts to train ChatGPT, and it works pretty well. For people like me who don't have a coding background it is really useful. Plus, when I come here I often don't get an answer.