Hugging Face
AbstractPhil (AbstractPhila)
81 followers · 101 following
https://civitai.com/user/AbstractPhila
AbstractEyes
AI & ML interests
datasets, research papers, experimentation, vision, classification, text encoders, tokenization, llms, diffusion, distillation, and more.
Recent Activity
- published a model 43 minutes ago: AbstractPhil/eig-triton
- published a model 44 minutes ago: AbstractPhil/eigh-triton
- replied to their post about 1 hour ago:
My heavily engineered repo, https://github.com/AbstractEyes/pytorch-parallel-compiler, has been directly integrated into the geofractal repo for v1.2. If you use the geofractal repo, be sure to pull for potential performance increases.

The WideRouter will enable several core new features; the two most important for our next experiment are:

1. Directly integrated multi-opinion constellation structures. These enable dynamic compiled expansions internally within the structure for huge performance gains.

2. Controllable stage-by-stage compilation. Each stage can be compiled or left uncompiled. SVD is notoriously compiler-unfriendly because of the linalg eigendecompositions, and I will be addressing that function DIRECTLY soon. There will be no quarter for graph breaks.

If the WideRouter causes any major bugs or breaks in your code (bad calculations, incorrect or deviated gradients, twisted or contorted dtype outputs, or any major compilation errors), please don't hesitate to open a pull request. Claude and I will promptly solve any major issues. Once everything is perfectly in line and the graph matches, the transformer will gain massive geometric performance boosts for huge structural basins with multiple layers of depth.

I will be addressing linalg.eig and linalg.eigh directly, in conjunction with several argsort functions that are causing large performance dips, and I will also address every use of .item() that can appear in the compiler's path.

After that, the ensemble topological transformer will be a go. It will enable quaternion, FlowMagnitude, FlowAlignment, FlowVelocity, FlowVelocityQuaternion, FlowVelocityOrbital, FlowVelocityPentachoron, and multiple other flow-matching systems that should improve performance substantially, with minimal overhead thanks to the precomputed geometric structure. The ensembles will feature multiple simultaneous batched and segmented forms of learning meant to train the oscillation omega predictor, "Beatrix".
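The graph-break concern in the post above (`.item()` inside a compiled path) can be illustrated with a small self-contained sketch. This is hypothetical example code, not code from the geofractal or pytorch-parallel-compiler repos; the function names are invented for the illustration. `.item()` copies a tensor value to the host, which `torch.compile` cannot trace through, so the captured graph breaks at that point; keeping the value as a tensor lets Dynamo trace one uninterrupted graph, which `fullgraph=True` can verify.

```python
import torch

# Hypothetical sketch: why .item() hurts compiled code paths.

def scale_with_item(x):
    # Host sync: .item() pulls the max onto the CPU, forcing a
    # graph break when this function is traced by torch.compile.
    m = x.abs().max().item()
    return x / (m + 1e-8)

def scale_tensor_only(x):
    # Stays on-device as a tensor, so the whole function is traceable.
    m = x.abs().max()
    return x / (m + 1e-8)

x = torch.randn(32)

# Both versions compute the same result in eager mode.
assert torch.allclose(scale_with_item(x), scale_tensor_only(x))

# The tensor-only version compiles as a single graph.
# backend="eager" runs Dynamo tracing without backend codegen,
# and fullgraph=True turns any graph break into a hard error.
ok = torch.compile(scale_tensor_only, fullgraph=True, backend="eager")
ok(x)
```

Under the same `fullgraph=True` setting, compiling `scale_with_item` would typically error out at the `.item()` call (unless scalar-output capture is enabled in the Dynamo config), which is one way to hunt down every host sync hiding in a compiled path.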
AbstractPhil's models (158)
Sort: Recently updated
AbstractPhil/OMEGA-BIGASP • Updated Apr 2, 2025 • 3
AbstractPhil/PONY-SIM-V4 • Updated Mar 28, 2025 • 1
AbstractPhil/SIM-V5 • Updated Mar 27, 2025 • 1
AbstractPhil/SDXL-SIM-REFINER • Updated Mar 16, 2025
AbstractPhil/SDXL-SIM_NAI-VPRED • Updated Mar 16, 2025
AbstractPhil/SDXL-Simulacrum-V3-1 • 0.2B • Updated Mar 3, 2025
AbstractPhil/sdxl-interpolated • Text-to-Image • Updated Feb 10, 2025
AbstractPhil/sdxl-interpolated-nai-xl-11 • Text-to-Image • Updated Feb 9, 2025