- Location: China
- Website: xzhewei.com
Starred repositories
cube studio, an open-source cloud-native one-stop machine learning / deep learning / large-model AI platform. Supports SSO login, multi-tenancy, big-data platform integration, in-browser notebook development, drag-and-drop pipeline orchestration, multi-node multi-GPU distributed training, hyperparameter search, inference serving with vGPU, edge computing, serverless, an annotation platform, automated annotation, dataset management, large-model fine-tuning, vLLM large-model inference, LLMOps, private knowledge bases, an AI model application store, and one-click model development/inference/fine-tuning, …
OmniXAI: A Library for eXplainable AI
Visual Blocks for ML is a Google visual programming framework that lets you create ML pipelines in a no-code graph editor. You and your users can quickly prototype workflows by connecting drag-…
Tools for converting Label Studio annotations into common dataset formats
An Open-Source Package for Textual Adversarial Attack.
A Python library for adversarial machine learning focusing on benchmarking adversarial robustness.
This repo contains artwork/logos for trusted ai projects.
A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
The container platform tailored for Kubernetes multi-cloud, datacenter, and edge management ⎈ 🖥 ☁️
Installs Kubernetes/K3s alone or together with KubeSphere, plus related cloud-native add-ons; supports all-in-one, multi-node, and HA deployments 🔥 ⎈ 🐳
Pedestrian detection python tool for Detectron framework
🦉 ML Experiments and Data Management with Git
Privacy Meter: An open-source library to audit data privacy in statistical and machine learning algorithms.
An awesome list of papers on privacy attacks against machine learning
Interpretability and explainability of data and machine learning models
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
From Handcrafted to Deep Features for Pedestrian Detection: A Survey (TPAMI 2021)
Study materials for China's Soft Exam (national software qualification exam), collected from the web. Currently includes exam-prep materials for the System Architect, Project Manager, and Software Designer tracks.
The AI developer platform. Use Weights & Biases to train and fine-tune models, and manage models from experimentation to production.
YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
A collection of infrastructure and tools for research in neural network interpretability.
Keras implementation of "One pixel attack for fooling deep neural networks" using differential evolution on Cifar10 and ImageNet
Highly cited and top-conference papers from recent years on the interpretability of deep neural network models (with accompanying code).