Absortio

Email → Summary → Bookmark → Email

Per page:

GitHub - stenh0use/docker-machine-for-mac: Docker Machine for Mac - an alternative to Docker for Mac

Jan 28, 2022 21:52 • github.com GitHub

Docker Machine for Mac - an alternative to Docker for Mac.

The Paper: “A Deep Probabilistic Model for Customer Lifetime Value Prediction”

Jan 28, 2022 08:15 • towardsdatascience.com Towards Data Science

Predicting a customer’s lifetime value (LTV) can be quite a challenging task. Wang, Liu and Miao propose using a neural network with a mixture loss to handle the intricacies of churn and lifetime…
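The excerpt only hints at the mixture loss. As a rough sketch, one way to combine a churn probability with a value distribution, in the spirit of the zero-inflated lognormal likelihood the paper describes, is shown below in NumPy; the function and argument names are illustrative, not taken from the paper's code.

```python
import numpy as np

def mixture_ltv_loss(y_true, return_logit, mu, log_sigma):
    """Sketch of a zero-inflated lognormal negative log-likelihood.

    y_true:        observed customer value (0 for customers who churned).
    return_logit:  network output for P(customer returns), pre-sigmoid.
    mu, log_sigma: lognormal parameters for the value of returning customers.
    """
    y_true = np.asarray(y_true, dtype=float)
    returned = (y_true > 0).astype(float)
    p_return = 1.0 / (1.0 + np.exp(-np.asarray(return_logit, dtype=float)))

    # Classification term: did the customer return at all?
    eps = 1e-7
    class_nll = -(returned * np.log(p_return + eps)
                  + (1.0 - returned) * np.log(1.0 - p_return + eps))

    # Regression term: lognormal NLL, applied only to returning customers.
    sigma = np.exp(np.asarray(log_sigma, dtype=float))
    safe_y = np.where(returned > 0, y_true, 1.0)  # avoid log(0) on zeros
    reg_nll = returned * (np.log(safe_y * sigma * np.sqrt(2.0 * np.pi))
                          + (np.log(safe_y) - mu) ** 2 / (2.0 * sigma ** 2))

    return float(np.mean(class_nll + reg_nll))
```

If the churn head and the value head share a network trunk, a single loss like this trains both jointly, which is the appeal of the mixture formulation mentioned in the summary.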

🔥 🚀 30 Laravel Eloquent Tips

Jan 27, 2022 18:34 • hamidafghan.me Hamid Afghan - Software Developer

This is a shortlist of 30 amazing hidden Laravel Eloquent tips that help code flow smoothly.

Miller 6.0.0 Documentation

Jan 27, 2022 08:11 • miller.readthedocs.io

The big picture: even well into the 21st century, our world is full of text-formatted data like CSV (Google "CSV memes", for example). We need tooling to thrive in this world, nimbly manipulating data that lives in CSV files, and we need tooling to move beyond CSV, pulling data out and into other storage and processing systems. Miller is designed for both of these goals.
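Concretely, the first of those goals is format-to-format record streaming. A minimal plain-Python illustration of the same hop Miller performs (CSV rows in, JSON records out) is below; Miller itself is a standalone command-line tool, so this sketch only shows the shape of the task, not Miller's own interface.

```python
import csv
import json
import sys

def csv_to_json_records(path):
    # Stream each CSV row out as a standalone JSON object,
    # the CSV -> JSON hop described in the Miller docs.
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            print(json.dumps(row))

if __name__ == "__main__":
    csv_to_json_records(sys.argv[1])  # e.g. python csv_to_json.py data.csv
```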

WebGPT: Browser-assisted question-answering with human feedback

Jan 24, 2022 12:26 • arxiv.org arXiv.org

We fine-tune GPT-3 to answer long-form questions using a text-based web-browsing environment, which allows the model to search and navigate the web. By setting up the task so that it can be performed by humans, we are able to train models on the task using imitation learning, and then optimize answer quality with human feedback. To make human evaluation of factual accuracy easier, models must collect references while browsing in support of their answers. We train and evaluate our models on ELI5, a dataset of questions asked by Reddit users. Our best model is obtained by fine-tuning GPT-3 using behavior cloning, and then performing rejection sampling against a reward model trained to predict human preferences. This model's answers are preferred by humans 56% of the time to those of our human demonstrators, and 69% of the time to the highest-voted answer from Reddit.
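The last step in the abstract, rejection sampling against a reward model, reduces to a best-of-n loop: sample several candidate answers and keep the one the reward model scores highest. A minimal sketch, with `generate` and `reward_model` as placeholder callables standing in for the fine-tuned policy and the preference model (neither is OpenAI's actual API):

```python
import random

def best_of_n(question, generate, reward_model, n=16):
    # Draw n candidate answers from the policy, score each with the
    # reward model, and return the highest-scoring one.
    candidates = [generate(question) for _ in range(n)]
    scores = [reward_model(question, answer) for answer in candidates]
    best = max(range(n), key=lambda i: scores[i])
    return candidates[best]

# Toy usage with stand-in components: the "policy" picks a canned answer
# at random and the "reward model" simply prefers longer answers.
answer = best_of_n(
    "Why is the sky blue?",
    generate=lambda q: random.choice([
        "Rayleigh scattering.",
        "Blue light scatters more than red in the atmosphere.",
    ]),
    reward_model=lambda q, a: len(a),
    n=4,
)
print(answer)
```

Behavior cloning gives the policy its browsing and answering behavior; best-of-n sampling then spends extra inference-time compute to pick the candidate the reward model predicts humans would prefer.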