How to assign certain samples as negative samples when using sampled_softmax_loss in tensorflow?

The API of sampled_softmax_loss goes like:

tf.nn.sampled_softmax_loss(
    weights, 
    biases, 
    labels, 
    inputs, 
    num_sampled, 
    num_classes, 
    num_true=1,
    sampled_values=None, 
    ...
)

I've noticed that the arg sampled_values is the one which determines what negative samples we take, and it's returned by a _candidate_sampler function like tf.random.fixed_unigram_candidate_sampler.

And in tf.random.fixed_unigram_candidate_sampler we can set the probability of each class being chosen as a negative sample.
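
For reference, here is a minimal sketch of how the unigrams argument biases the draw (the class weights and vocabulary size below are made up for illustration):

import tensorflow as tf

# Hypothetical setup: 5 classes, one positive label per example
true_classes = tf.constant([[0]], dtype=tf.int64)

# unigrams assigns each class a relative weight; classes 2-4 are 10x more
# likely to be drawn as negatives than classes 0-1
sampled = tf.random.fixed_unigram_candidate_sampler(
    true_classes=true_classes,
    num_true=1,
    num_sampled=3,
    unique=True,
    range_max=5,
    unigrams=[1.0, 1.0, 10.0, 10.0, 10.0],
)

# The result is the (sampled_candidates, true_expected_count,
# sampled_expected_count) tuple that sampled_values expects
print(sampled.sampled_candidates)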

But my question is: how can I assign certain samples as negative samples on purpose?

For instance, in the case of a recommender system, I'd like to add some hard negative samples to the model. So I want the hard negative samples to be chosen for sure, not by probability as in the _candidate_sampler functions.


One possible way to achieve this is to build the sampled_values tuple yourself, so that the desired hard negative samples are always included among the candidates. For example:

# Define the hard negative samples (class ids that must appear as negatives)
hard_negatives = [4, 5, 6]

# sampled_softmax_loss expects sampled_values to be a tuple of tensors
# (sampled_candidates, true_expected_count, sampled_expected_count),
# not a sampler function, so build the tuple by hand
sampled_candidates = tf.constant(hard_negatives + [0, 1, 2, 3], dtype=tf.int64)

# The expected counts act as corrections for the sampling bias; using 1.0
# everywhere applies no correction (simple, but biased)
true_expected_count = tf.ones_like(labels, dtype=tf.float32)
sampled_expected_count = tf.ones_like(sampled_candidates, dtype=tf.float32)

# Use the hand-built tuple in the sampled_softmax_loss function
tf.nn.sampled_softmax_loss(
  weights, 
  biases, 
  labels, 
  inputs, 
  num_sampled, 
  num_classes, 
  num_true=1,
  sampled_values=(sampled_candidates, true_expected_count, sampled_expected_count), 
  ...
)

This way, the hard negative samples are guaranteed to appear among the negative samples used by the sampled_softmax_loss function, rather than merely being likely to. You can adjust which extra candidates you append, or supply real expected counts instead of ones, as needed.
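
If you still want some randomly drawn negatives alongside the guaranteed hard ones, one possible sketch is to concatenate a sampler's output with the fixed ids (the sizes below are made up, and reusing all-ones expected counts for the forced part is only an approximation):

# Draw the remaining negatives at random, then prepend the forced ids
hard_negatives = tf.constant([4, 5, 6], dtype=tf.int64)

rand = tf.random.uniform_candidate_sampler(
    true_classes=labels,      # int64 labels of shape [batch, 1]
    num_true=1,
    num_sampled=13,           # hypothetical: 16 negatives in total
    unique=True,
    range_max=num_classes,
)

# Note: the random draw may still collide with the forced ids
sampled_candidates = tf.concat([hard_negatives, rand.sampled_candidates], axis=0)
sampled_expected_count = tf.concat(
    [tf.ones_like(hard_negatives, dtype=tf.float32),
     rand.sampled_expected_count], axis=0)

sampled_values = (sampled_candidates, rand.true_expected_count, sampled_expected_count)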


You need to understand that the candidate sampler function only supplies the candidates; your question really comes down to how to create a negative sampler.

You don't need to write a negative sampler from scratch: once you call a sampler (with unique=True, say), it already returns the (sampled_candidates, true_expected_count, sampled_expected_count) tuple that sampled_values expects. A hard negative is a candidate that contrasts strongly against the positives, and you can achieve that by shaping the sampling distribution. The relevant pieces are:

  1. Random Uniform Candidate Sampler
  2. Candidate Sampling
  3. Sampled SoftMax

Put simply: the weights and biases vary, but the functions are the same.

import tensorflow as tf

# Toy inputs for the loss
weights = tf.zeros([4, 1])
biases = tf.zeros([4])
labels = tf.ones([4, 1])
inputs = tf.zeros([4, 1])
num_classes = 1

# Sampler arguments
true_classes = tf.ones([4, 4], dtype=tf.int64)
num_true = 4
num_sampled = 1
unique = True
range_max = 1

# The sampler returns the (sampled_candidates, true_expected_count,
# sampled_expected_count) tuple that sampled_values expects
sampler = tf.random.uniform_candidate_sampler(
    true_classes,
    num_true,
    num_sampled,
    unique,
    range_max,
    seed=None,
    name=None
)

loss_fn = tf.nn.sampled_softmax_loss(
    weights,
    biases,
    labels,
    inputs,
    num_sampled,
    num_classes,
    num_true=1,
    sampled_values=sampler,
    remove_accidental_hits=True,
    seed=None,
    name='sampled_softmax_loss'
)
print(loss_fn)

Output: example values from running three times.

tf.Tensor([6.437752 6.437752 6.437752 6.437752], shape=(4,), dtype=float32)
tf.Tensor([6.437752 6.437752 6.437752 6.437752], shape=(4,), dtype=float32)
tf.Tensor([6.437752 6.437752 6.437752 6.437752], shape=(4,), dtype=float32)

Sample application: the coefficient values are feedback responses from the environment.

import numpy as np
import tensorflow as tf

# coefficient_0 ... coefficient_9 are assumed defined elsewhere (environment feedback)
temp = tf.random.normal([10], 1, 0.2, tf.float32)
temp = np.asarray(temp) * np.asarray([ coefficient_0, coefficient_1, coefficient_2, coefficient_3, coefficient_4, coefficient_5, coefficient_6, coefficient_7, coefficient_8, coefficient_9 ])
temp = tf.nn.softmax(temp)
action = int(np.argmax(temp))

Output: the softmax makes the significant candidates stand out; you can steer it by adding pattern values such as [ 0.1, 0.2, 0.3 ... ] or shared values that you learn while training on the samples.

tf.Tensor(
[5.35664947e-26 1.02519161e-34 4.47336880e-25 1.05499833e-13
 7.77301813e-01 1.63251559e-31 1.73154682e-19 1.13722112e-08
 2.22698176e-01 1.16543030e-19], shape=(10,), dtype=float64)
player and gap: -1.0
action: 4
...
tf.Tensor(
[1.45061225e-24 4.64149420e-32 1.42394889e-19 9.99999965e-01
 3.52535193e-08 1.52202337e-21 1.68052664e-40 2.72842143e-49
 1.52027117e-39 2.65319842e-20], shape=(10,), dtype=float64)
player and gap: -4.0
action: 3
...
tf.Tensor(
[1.44775387e-01 1.58734885e-36 4.45880581e-01 4.09344020e-01
 9.96034489e-16 9.89058968e-33 1.00848780e-35 1.13891565e-26
 1.02650193e-16 1.18694311e-08], shape=(10,), dtype=float64)
