
Computer Networks (1) - IP Addresses

IP Addresses

Based on their structure, IP addresses are divided into five classes: A, B, C, D, and E. Classes A, B, and C are the most common classes used for public networks; class D is used for multicast, and class E is reserved for experiments and future use.
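
As a rough sketch of the classful rules (my own illustration; ip_class is a made-up helper), the class of an IPv4 address is determined by its first octet:

import ipaddress

def ip_class(address: str) -> str:
    """Return the classful category (A-E) of a dotted-quad IPv4 address."""
    first_octet = int(ipaddress.IPv4Address(address)) >> 24
    if first_octet < 128:
        return "A"   # leading bit 0
    elif first_octet < 192:
        return "B"   # leading bits 10
    elif first_octet < 224:
        return "C"   # leading bits 110
    elif first_octet < 240:
        return "D"   # leading bits 1110 (multicast)
    else:
        return "E"   # leading bits 1111 (experimental)

print(ip_class("10.0.0.1"))     # A
print(ip_class("224.0.0.251"))  # D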

Algorithms (5) - Bit Operations

Bit Operations

Bitwise operations act directly on the binary bits of an integer. In Python, the bitwise operators can be used to perform bitwise AND, OR, XOR, NOT, left shift, and right shift operations.
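
For example, applied to two small integers:

a, b = 0b1100, 0b1010
print(bin(a & b))    # 0b1000   bitwise AND
print(bin(a | b))    # 0b1110   bitwise OR
print(bin(a ^ b))    # 0b110    bitwise XOR
print(~a)            # -13      bitwise NOT (Python integers are signed)
print(bin(a << 2))   # 0b110000 left shift by 2
print(bin(a >> 2))   # 0b11     right shift by 2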

Brian Kernighan's Algorithm

Brian Kernighan's algorithm is an efficient way to count the number of 1 bits in the binary representation of an integer. Its core idea is to use bit operations to shorten the counting process. The algorithm is based on the observation that for any nonzero integer n, the bitwise AND of n and n - 1, i.e. n & (n - 1), turns the rightmost 1 bit of n into 0.
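
A minimal implementation of this idea (count_ones is an illustrative name; it assumes a non-negative integer):

def count_ones(n: int) -> int:
    """Count the 1 bits of n >= 0 using Brian Kernighan's trick."""
    count = 0
    while n:
        n &= n - 1   # clear the rightmost 1 bit
        count += 1
    return count

print(count_ones(0b10110))  # 3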

Bit Manipulation Tricks

ANDing a number with 1 extracts its lowest bit and tells whether the number is odd or even. ANDing n with (n - 1) clears the lowest set bit. For an integer x, we can use x & (1 << k) to check whether bit k of x's binary representation (with the lowest bit as bit 0) is 1.
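
These tricks translate directly into code, for example:

x = 0b101101  # 45

print(x & 1)               # 1 -> x is odd
print(bin(x & (x - 1)))    # 0b101100, lowest set bit cleared
k = 3
print(bool(x & (1 << k)))  # True: bit 3 of x is 1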

References

https://leetcode.cn/problems/counting-bits/solutions/627418/bi-te-wei-ji-shu-by-leetcode-solution-0t1i/?envType=study-plan-v2&envId=leetcode-75

https://leetcode.cn/problems/minimum-flips-to-make-a-or-b-equal-to-c/solutions/101777/huo-yun-suan-de-zui-xiao-fan-zhuan-ci-shu-by-lee-2/?envType=study-plan-v2&envId=leetcode-75

Algorithms (4) - Trie

Trie (Prefix Tree)

A trie (prefix tree), also known as a dictionary tree or word lookup tree, is a tree-shaped data structure designed for efficiently storing and retrieving the terms of a set of strings.

A trie is a multi-way tree in which each node represents a string prefix: the sequence of characters along the path from the root to any node forms the prefix associated with that node.

Applications of Tries

Search engines use tries to quickly retrieve and match the query terms users type. When a user enters a query term, the search engine can start at the root of the trie and walk down the nodes character by character, following the characters of the query, until it finds a matching word or prefix.

Search engines can also use tries to provide auto-completion suggestions and spell checking. Because a trie stores a large number of words and their prefixes, the engine can quickly suggest complete words based on the partial word (prefix) the user has typed so far, or point out spelling mistakes and offer the correct spelling.

class Trie:
    def __init__(self):
        # One slot per lowercase English letter; this implementation assumes
        # the input consists of the characters 'a'-'z' only.
        self.children = [None] * 26
        self.isEnd = False

    def searchPrefix(self, prefix: str) -> "Trie":
        # Walk down the tree following the characters of the prefix.
        node = self
        for ch in prefix:
            ch = ord(ch) - ord("a")
            if not node.children[ch]:
                return None
            node = node.children[ch]
        return node

    def insert(self, word: str) -> None:
        # Create missing nodes along the path, then mark the word's last node.
        node = self
        for ch in word:
            ch = ord(ch) - ord("a")
            if not node.children[ch]:
                node.children[ch] = Trie()
            node = node.children[ch]
        node.isEnd = True

    def search(self, word: str) -> bool:
        node = self.searchPrefix(word)
        return node is not None and node.isEnd

    def startsWith(self, prefix: str) -> bool:
        return self.searchPrefix(prefix) is not None
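
A short usage example of the class above:

trie = Trie()
trie.insert("apple")
print(trie.search("apple"))    # True
print(trie.search("app"))      # False, "app" was never inserted as a word
print(trie.startsWith("app"))  # True
trie.insert("app")
print(trie.search("app"))      # True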

References

https://leetcode.cn/problems/implement-trie-prefix-tree/solutions/1/shi-xian-trie-qian-zhui-shu-by-leetcode-ti500/

Algorithms (3) - Heap / Priority Queue

A heap is a special tree-shaped data structure, usually implemented with an array. A heap has the following property: it is a complete binary tree, i.e., every level is completely filled except possibly the last, and the nodes of the last level are packed to the left.

Max heap: the value of a parent node is always greater than or equal to the values of its children.

Min heap: the value of a parent node is always less than or equal to the values of its children.

Priority Queue

A priority queue is a special kind of queue in which each element has a priority. Elements are dequeued according to their priority rather than the order in which they entered the queue.
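
In Python, the heapq module maintains a min-heap on top of a plain list and is commonly used as a priority queue. A minimal sketch (lower numbers mean higher priority here; the task names are made up):

import heapq

pq = []
heapq.heappush(pq, (2, "write report"))
heapq.heappush(pq, (1, "fix bug"))
heapq.heappush(pq, (3, "refactor"))

while pq:
    priority, task = heapq.heappop(pq)  # always pops the smallest priority
    print(priority, task)
# 1 fix bug
# 2 write report
# 3 refactor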

Deep Learning (3) - The PyTorch Framework

PyTorch

Tensors

Tensors are a specialized data structure very similar to arrays and matrices. Tensors are similar to NumPy's ndarrays, except that tensors can run on GPUs or other hardware accelerators, and they are also optimized for automatic differentiation.

By default, tensors are created on the CPU. We need to explicitly move a tensor to the GPU using the .to method (after checking that a GPU is available). Keep in mind that copying large tensors across devices can be expensive in time and memory!
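
A small sketch of this; the GPU branch only runs when CUDA is available:

import torch

data = [[1, 2], [3, 4]]
x = torch.tensor(data)        # created on the CPU by default

if torch.cuda.is_available():
    x = x.to("cuda")          # explicitly move the tensor to the GPU

print(x.shape, x.dtype, x.device)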

Operations that store the result in the operand are called in-place operations. They are denoted by a _ suffix. For example, x.copy_(y) and x.t_() will change x.

Data Loading

PyTorch provides two data primitives, torch.utils.data.DataLoader and torch.utils.data.Dataset, which let you use preloaded datasets as well as your own data.
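
As a small sketch (ToyDataset is a made-up example with random data, not a real dataset), a custom Dataset only needs __len__ and __getitem__, and a DataLoader wraps it for batching and shuffling:

import torch
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    """A tiny dataset of random features and integer labels (illustration only)."""
    def __init__(self, n=100):
        self.x = torch.randn(n, 4)
        self.y = torch.randint(0, 2, (n,))

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        return self.x[idx], self.y[idx]

loader = DataLoader(ToyDataset(), batch_size=16, shuffle=True)
for xb, yb in loader:
    print(xb.shape, yb.shape)  # torch.Size([16, 4]) torch.Size([16])
    break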

Neural Networks

The torch.nn namespace provides all the building blocks you need to build your own neural network. Every module in PyTorch subclasses nn.Module.
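
A minimal example of such a subclass (TinyNet is a toy two-layer network of my own, not a model from the tutorial):

import torch
from torch import nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(4, 8),
            nn.ReLU(),
            nn.Linear(8, 2),
        )

    def forward(self, x):
        return self.layers(x)

model = TinyNet()
print(model(torch.randn(3, 4)).shape)  # torch.Size([3, 2])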

Automatic Differentiation

When training neural networks, the most frequently used algorithm is back propagation. In this algorithm, the parameters (model weights) are adjusted according to the gradient of the loss function with respect to each parameter.

To compute these gradients, PyTorch has a built-in differentiation engine called torch.autograd. It supports automatic computation of gradients for any computational graph. When we need the gradients of the loss function with respect to certain variables, we set the requires_grad property of those tensors. The value of requires_grad can be set when creating a tensor, or later via x.requires_grad_(True). To compute the derivatives, we call loss.backward() and then retrieve the values from w.grad and b.grad.

To stop a tensor from tracking history, you can call .detach() to detach it from the computation history and prevent future computation from being tracked. To prevent tracking history (and using memory), you can also wrap a code block in with torch.no_grad():.
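
A small end-to-end sketch of these pieces, using a single linear layer on made-up data:

import torch

x = torch.ones(5)                      # input
y = torch.zeros(3)                     # target
w = torch.randn(5, 3, requires_grad=True)
b = torch.randn(3, requires_grad=True)

z = torch.matmul(x, w) + b
loss = torch.nn.functional.binary_cross_entropy_with_logits(z, y)
loss.backward()                        # compute gradients
print(w.grad.shape, b.grad.shape)      # torch.Size([5, 3]) torch.Size([3])

with torch.no_grad():                  # disable gradient tracking
    z = torch.matmul(x, w) + b
print(z.requires_grad)                 # False

z_det = (torch.matmul(x, w) + b).detach()  # same effect via detach()
print(z_det.requires_grad)             # False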

Optimization

Inside the training loop, optimization happens in three steps:

Call optimizer.zero_grad() to reset the gradients of the model parameters. Gradients accumulate by default; to prevent double counting, we explicitly zero them at each iteration.

Backpropagate the prediction loss with loss.backward(). PyTorch stores the gradients of the loss with respect to each parameter.

Once we have the gradients, call optimizer.step() to adjust the parameters by the gradients collected in the backward pass.

def train_loop(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    # Set the model to training mode - important for batch normalization and dropout layers
    # Unnecessary in this situation but added for best practices
    model.train()
    for batch, (X, y) in enumerate(dataloader):
        # Compute prediction and loss
        pred = model(X)
        loss = loss_fn(pred, y)

        # Backpropagation
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

        if batch % 100 == 0:
            loss, current = loss.item(), batch * batch_size + len(X)
            print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")


def test_loop(dataloader, model, loss_fn):
    # Set the model to evaluation mode - important for batch normalization and dropout layers
    # Unnecessary in this situation but added for best practices
    model.eval()
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    test_loss, correct = 0, 0

    # Evaluating the model with torch.no_grad() ensures that no gradients are computed during test mode
    # also serves to reduce unnecessary gradient computations and memory usage for tensors with requires_grad=True
    with torch.no_grad():
        for X, y in dataloader:
            pred = model(X)
            test_loss += loss_fn(pred, y).item()
            correct += (pred.argmax(1) == y).type(torch.float).sum().item()

    test_loss /= num_batches
    correct /= size
    print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
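
One possible way to drive these two loops (a sketch; model, train_dataloader, test_dataloader and the global batch_size used above are assumed to be defined as in the tutorial):

learning_rate = 1e-3
batch_size = 64
epochs = 5

# model, train_dataloader and test_dataloader are assumed to exist already
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

for t in range(epochs):
    print(f"Epoch {t+1}\n-------------------------------")
    train_loop(train_dataloader, model, loss_fn, optimizer)
    test_loop(test_dataloader, model, loss_fn)
print("Done!")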

Saving and Loading Models

Saving and Loading Model Weights

PyTorch models store the learned parameters in an internal state dictionary called the state_dict. These can be persisted with the torch.save method.

import torch
from torchvision import models

model = models.vgg16(weights='IMAGENET1K_V1')
torch.save(model.state_dict(), 'model_weights.pth')

To load model weights, you first need to create an instance of the same model, and then load the parameters with the load_state_dict() method.

model = models.vgg16() # we do not specify ``weights``, i.e. create untrained model
model.load_state_dict(torch.load('model_weights.pth', weights_only=True))
model.eval()

Note: be sure to call model.eval() before inference in order to set the dropout and batch normalization layers to evaluation mode. Failing to do this will yield inconsistent inference results.

Saving and Loading Models with Shapes

torch.save(model, 'model.pth')
model = torch.load('model.pth', weights_only=False)

References

https://pytorch.ac.cn/tutorials/beginner/basics/tensor_tutorial.html

Recommender Systems

Recommender Systems

Collaborative Filtering

In a broad sense, collaborative filtering is the process of filtering for information or patterns using techniques involving collaboration among multiple users, agents, and data sources. Overall, CF techniques can be categorized into memory-based CF, model-based CF, and their hybrid.

Matrix Factorization

Matrix factorization is a class of collaborative filtering models. Specifically, the model factorizes the user-item interaction matrix (e.g., rating matrix) into the product of two lower-rank matrices, capturing the low-rank structure of the user-item interactions.

Let $\mathbf{R} \in \mathbb{R}^{m \times n}$ denote the interaction matrix with $m$ users and $n$ items, where the values of $\mathbf{R}$ represent explicit ratings. The user-item interactions are factorized into a user latent matrix $\mathbf{P} \in \mathbb{R}^{m \times k}$ and an item latent matrix $\mathbf{Q} \in \mathbb{R}^{n \times k}$. For a given item $i$, the elements of $\mathbf{q}_i$ measure the extent to which the item possesses characteristics such as the genres and languages of a movie. For a given user $u$, the elements of $\mathbf{p}_u$ measure the extent of the user's interest in items' corresponding characteristics.
The predicted ratings can be estimated by

$$\hat{\mathbf{R}} = \mathbf{P}\mathbf{Q}^\top$$

One major problem of this prediction rule is that user/item biases cannot be modeled, so a user bias $b_u$ and an item bias $b_i$ are added, $\hat{\mathbf{R}}_{ui} = \mathbf{p}_u \mathbf{q}_i^\top + b_u + b_i$, and the model is trained by minimizing

$$\underset{\mathbf{P}, \mathbf{Q}, b}{\operatorname{argmin}} \sum_{(u, i) \in \mathcal{K}} \| \mathbf{R}_{ui} - \hat{\mathbf{R}}_{ui} \|^2 + \lambda \left( \| \mathbf{P} \|^2_F + \| \mathbf{Q} \|^2_F + b_u^2 + b_i^2 \right)$$

where $\mathcal{K}$ denotes the set of observed (user, item) pairs.
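
A compact sketch of this biased matrix factorization model in PyTorch (my own illustration, not code from the referenced book; BiasedMF and the sizes are made up). Training it with mean squared error and weight decay corresponds to the regularized objective above:

import torch
from torch import nn

class BiasedMF(nn.Module):
    """Predicts R_ui = p_u . q_i + b_u + b_i."""
    def __init__(self, num_users, num_items, k=16):
        super().__init__()
        self.P = nn.Embedding(num_users, k)
        self.Q = nn.Embedding(num_items, k)
        self.user_bias = nn.Embedding(num_users, 1)
        self.item_bias = nn.Embedding(num_items, 1)

    def forward(self, user_ids, item_ids):
        p_u = self.P(user_ids)
        q_i = self.Q(item_ids)
        dot = (p_u * q_i).sum(dim=1)
        return dot + self.user_bias(user_ids).squeeze(1) + self.item_bias(item_ids).squeeze(1)

model = BiasedMF(num_users=100, num_items=50)
scores = model(torch.tensor([0, 1]), torch.tensor([3, 7]))
print(scores.shape)  # torch.Size([2])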

AutoRec

AutoRec, an autoencoder-based collaborative filtering model, reconstructs each column (item) of the interaction matrix as

$$h(\mathbf{R}_{*i}) = f(\mathbf{W} \cdot g(\mathbf{V} \mathbf{R}_{*i} + \mu) + b)$$

and is trained so that only observed ratings contribute to the reconstruction loss:

$$\underset{\mathbf{W},\mathbf{V},\mu, b}{\operatorname{argmin}} \sum_{i=1}^M \| \mathbf{R}_{*i} - h(\mathbf{R}_{*i}) \|_{\mathcal{O}}^2 + \lambda \left( \| \mathbf{W} \|_F^2 + \| \mathbf{V} \|_F^2 \right)$$

Bayesian Personalized Ranking (BPR)

BPR is a pairwise ranking loss derived from the maximum posterior estimator: for each user $u$, an observed item $i$ should be ranked higher than an unobserved item $j$. The training criterion is

$$\begin{aligned}
\textrm{BPR-OPT} :&= \ln p(\Theta \mid >_u) \\
&\propto \ln p(>_u \mid \Theta)\, p(\Theta) \\
&= \ln \prod_{(u, i, j) \in D} \sigma(\hat{y}_{ui} - \hat{y}_{uj})\, p(\Theta) \\
&= \sum_{(u, i, j) \in D} \ln \sigma(\hat{y}_{ui} - \hat{y}_{uj}) + \ln p(\Theta) \\
&= \sum_{(u, i, j) \in D} \ln \sigma(\hat{y}_{ui} - \hat{y}_{uj}) - \lambda_\Theta \|\Theta\|^2
\end{aligned}$$

An alternative pairwise objective is the hinge loss

$$\sum_{(u, i, j) \in D} \max(m - \hat{y}_{ui} + \hat{y}_{uj},\, 0)$$

where $m$ is the safety margin size.

NeuMF

This model leverages the flexibility and non-linearity of neural networks to replace the dot products of matrix factorization, aiming to enhance the model's expressiveness. Specifically, the model is structured with two subnetworks, generalized matrix factorization (GMF) and an MLP, and models the interactions via two pathways instead of simple dot products. The outputs of the two networks are concatenated to compute the final prediction scores.
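
A rough structural sketch of these two pathways (my own simplification with made-up sizes; the original NeuMF uses a deeper MLP tower and negative sampling for training):

import torch
from torch import nn

class NeuMFSketch(nn.Module):
    """GMF and MLP pathways whose outputs are concatenated for the final score."""
    def __init__(self, num_users, num_items, k=16, hidden=32):
        super().__init__()
        # Separate embeddings for the GMF and MLP pathways
        self.gmf_user = nn.Embedding(num_users, k)
        self.gmf_item = nn.Embedding(num_items, k)
        self.mlp_user = nn.Embedding(num_users, k)
        self.mlp_item = nn.Embedding(num_items, k)
        self.mlp = nn.Sequential(nn.Linear(2 * k, hidden), nn.ReLU())
        self.out = nn.Linear(k + hidden, 1)

    def forward(self, u, i):
        gmf = self.gmf_user(u) * self.gmf_item(i)  # element-wise product (GMF)
        mlp = self.mlp(torch.cat([self.mlp_user(u), self.mlp_item(i)], dim=1))
        return torch.sigmoid(self.out(torch.cat([gmf, mlp], dim=1))).squeeze(1)

model = NeuMFSketch(num_users=100, num_items=50)
print(model(torch.tensor([0, 1]), torch.tensor([3, 7])).shape)  # torch.Size([2])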

Sequence-Aware Recommender Systems

Caser, short for convolutional sequence embedding recommendation model, adopts convolutional neural networks to capture the dynamic pattern influences of users' recent activities. The main components of Caser are a horizontal convolutional network and a vertical convolutional network, which aim to uncover union-level and point-level sequence patterns, respectively. The goal of Caser is to recommend items by considering the user's general tastes as well as short-term intent.

Factorization Machines

The strengths of factorization machines over linear regression and matrix factorization are: (1) they can model $\chi$-way variable interactions, where $\chi$ is the polynomial order and is usually set to two; (2) a fast optimization algorithm associated with factorization machines reduces the polynomial computation time to linear complexity, making them extremely efficient, especially for high-dimensional sparse inputs.
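
The linear-time evaluation of the pairwise term relies on the identity $\sum_{i<j}\langle\mathbf{v}_i,\mathbf{v}_j\rangle x_i x_j = \frac{1}{2}\sum_{f}\big[(\sum_i v_{i,f} x_i)^2 - \sum_i v_{i,f}^2 x_i^2\big]$. A small sketch of a 2-way FM prediction using it (my own illustration; fm_predict and the dimensions are made up):

import torch

def fm_predict(x, w0, w, V):
    """2-way FM: w0 + w.x + sum_{i<j} <v_i, v_j> x_i x_j,
    with the pairwise term computed in O(k * d) instead of O(k * d^2)."""
    linear = w0 + x @ w
    xv = x @ V                    # (d,) @ (d, k) -> (k,)
    x2v2 = (x ** 2) @ (V ** 2)    # (k,)
    pairwise = 0.5 * (xv ** 2 - x2v2).sum()
    return linear + pairwise

d, k = 6, 3
x = torch.randn(d)
w0 = torch.tensor(0.1)
w = torch.randn(d)
V = torch.randn(d, k)
print(fm_predict(x, w0, w, V))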

References

https://d2l.ai/chapter_recommender-systems/index.html
